From Andrew.Magill at exelisvis.com Fri Mar 1 02:29:13 2013 From: Andrew.Magill at exelisvis.com (Andrew Magill) Date: Thu, 28 Feb 2013 18:29:13 -0700 Subject: [Libav-user] Misidentification of data stream as AAC_LATM Message-ID: <0A35EA74E38BA04787179D409AF0CB3F049AA560@coexch01.rsinc.com> I'm working on improving my software's video functionality and upgrading it to support the latest versions of FFmpeg, and I've run into a little problem. One of my requirements is to be able to return the raw packet data from any data streams in a video file, but the newer versions of FFmpeg are misidentifying my test video's data streams as audio AAC_LATM. I believe the culprits are commits 47818b2a ("Add LOAS demuxer.") and 7bdc5de3 ("Autodetect LOAS in transport streams.") from 2011-08-19. See output from ffprobe below: failing case from a newer version first, then passing case from an older version. It looks like the new LOAS demuxer's probe function is stumbling across extremely poor evidence that it can decode my stream, and is reporting that as a confidence of 1. Could it be considered a bug that LOAS is being a little overzealous by misidentifying my stream with minimum confidence? It seems better for the probes to fail than to pass with such a poor match. Looking at the probe function a little closer, it seems that any stream that contains a 16-bit value 0x56E0-0x56FF at any byte alignment will result in a confidence of at least 1. The odds of encountering that sequence in random data are about 1/2048, so with probe sizes of 2500 bytes, odds are better than half that it will return a confidence of 1 rather than 0. Might I suggest eliminating the (max_frames>=1) condition, or increasing its threshold to 2? 
ffmpeg-git-276f43b-win32-shared\bin\ffprobe.exe foo.mpg -loglevel debug ffprobe version N-32071-g276f43b, Copyright (c) 2007-2011 the FFmpeg developers built on Aug 23 2011 11:03:02 with gcc 4.6.1 configuration: --disable-static --enable-shared --disable-outdev=sdl --enable-gpl --enable-version3 --enable-memalign-hack --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib libavutil 51. 13. 0 / 51. 13. 0 libavcodec 53. 11. 0 / 53. 11. 0 libavformat 53. 9. 0 / 53. 9. 0 libavdevice 53. 3. 0 / 53. 3. 0 libavfilter 2. 34. 2 / 2. 34. 2 libswscale 2. 0. 0 / 2. 0. 0 libpostproc 51. 2. 0 / 51. 2. 0 [mpegts @ 0060F100] Format mpegts probed with size=2048 and score=100 [mpegts @ 0060F100] stream=0 stream_type=1b pid=21 prog_reg_desc= [mpegts @ 0060F100] stream=1 stream_type=15 pid=1f2 prog_reg_desc= [h264 @ 01D6A5A0] Unsupported bit depth: 0 [mpegts @ 0060F100] Continuity Check Failed [h264 @ 01D6A5A0] no picture [mpegts @ 0060F100] probing stream 1 pp:2500 [mpegts @ 0060F100] probing stream 1 pp:2499 [mpegts @ 0060F100] Probe with size=2299, packets=2 detected loas with score=1 [mpegts @ 0060F100] probing stream 1 pp:2498 [mpegts @ 0060F100] probing stream 1 pp:2497 [mpegts @ 0060F100] Probe with size=4601, packets=4 detected loas with score=1 [mpegts @ 0060F100] probed stream 1 [mpegts @ 0060F100] max_analyze_duration 5000000 reached at 5000000 [NULL @ 01D6C2A0] start time is not set in estimate_timings_from_pts [mpegts @ 0060F100] Continuity Check Failed Input #0, mpegts, from 'foo.mpg': Duration: 00:00:46.38, start: 0.572222, bitrate: 6223 kb/s Program 1 Stream #0.0[0x21], 302, 1/90000: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p, 
1280x720 [SAR 10:11 DAR 160:99], 1/120, 60 fps, 60 tbr, 90k tbn, 120 tbc Stream #0.1[0x1f2], 0, 1/90000: Audio: aac_latm ([21][0][0][0] / 0x0015), 0 channels [aac_latm @ 01D6C2A0] Unsupported bit depth: 0 ffmpeg-git-41bf67d-win32-shared\bin\ffprobe.exe foo.mpg -loglevel debug ffprobe version N-31932-g41bf67d, Copyright (c) 2007-2011 the FFmpeg developers built on Aug 16 2011 18:55:50 with gcc 4.6.1 configuration: --disable-static --enable-shared --disable-outdev=sdl --enable-gpl --enable-version3 --enable-memalign-hack --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib libavutil 51. 12. 0 / 51. 12. 0 libavcodec 53. 10. 0 / 53. 10. 0 libavformat 53. 7. 0 / 53. 7. 0 libavdevice 53. 3. 0 / 53. 3. 0 libavfilter 2. 31. 1 / 2. 31. 1 libswscale 2. 0. 0 / 2. 0. 0 libpostproc 51. 2. 0 / 51. 2. 
0 [mpegts @ 006FF100] Format mpegts probed with size=2048 and score=100 [mpegts @ 006FF100] stream=0 stream_type=1b pid=21 prog_reg_desc= [mpegts @ 006FF100] stream=1 stream_type=15 pid=1f2 prog_reg_desc= [h264 @ 0060A5A0] Unsupported bit depth: 0 [mpegts @ 006FF100] Continuity Check Failed [h264 @ 0060A5A0] no picture [mpegts @ 006FF100] probing stream 1 pp:2500 [mpegts @ 006FF100] probing stream 1 pp:2499 [mpegts @ 006FF100] probing stream 1 pp:2498 [mpegts @ 006FF100] probing stream 1 pp:2497 [mpegts @ 006FF100] probed stream 1 failed [mpegts @ 006FF100] max_analyze_duration 5000000 reached at 5000000 [mpegts @ 006FF100] Continuity Check Failed Input #0, mpegts, from 'foo.mpg': Duration: 00:00:46.95, start: 0.000000, bitrate: 6147 kb/s Program 1 Stream #0.0[0x21], 302, 1/90000: Video: h264 (Main), yuv420p, 1280x720 [SAR 10:11 DAR 160:99], 1/120, 60 fps, 60 tbr, 90k tbn, 120 tbc Stream #0.1[0x1f2], 3, 1/90000: Data: [21][0][0][0] / 0x0015 Unsupported codec with id 0 for input stream 1 Thanks! Andrew Magill From cehoyos at ag.or.at Fri Mar 1 08:55:54 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Fri, 1 Mar 2013 07:55:54 +0000 (UTC) Subject: [Libav-user] =?utf-8?q?Misidentification_of_data_stream_as_AAC=5F?= =?utf-8?q?LATM?= References: <0A35EA74E38BA04787179D409AF0CB3F049AA560@coexch01.rsinc.com> Message-ID: Andrew Magill writes: > Might I suggest eliminating the (max_frames>=1) condition As in http://git.videolan.org/?p=ffmpeg.git;a=commit;h=a60530e ? 
Carl Eugen From cehoyos at ag.or.at Fri Mar 1 09:54:59 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Fri, 1 Mar 2013 08:54:59 +0000 (UTC) Subject: [Libav-user] =?utf-8?b?V2h5IChjLT5mcmFtZV9zaXplID0gMCkgPyAtPiBD?= =?utf-8?q?ould_not_allocate_-22_bytes_for_samples_buffer?= References: Message-ID: Joe Flowers writes: > The only thing I did was change the line > codec = avcodec_find_encoder(AV_CODEC_ID_MP2); > to > codec = avcodec_find_encoder(AV_CODEC_ID_PCM_S16LE); > When I run "./decoding_encoding mp2", it ends with: "Could not > allocate -22 bytes for samples buffer". For PCM codecs, frame_size is 0 and calling av_samples_get_buffer_size() makes no sense. av_samples_get_buffer_size() returns AVERROR(EINVAL) which happens to be "-22". Carl Eugen From joe.flowers at nofreewill.com Fri Mar 1 11:57:33 2013 From: joe.flowers at nofreewill.com (Joe Flowers) Date: Fri, 1 Mar 2013 05:57:33 -0500 Subject: [Libav-user] Why (c->frame_size = 0) ? -> Could not allocate -22 bytes for samples buffer In-Reply-To: References: Message-ID: > For PCM codecs, frame_size is 0 and calling > av_samples_get_buffer_size() makes no sense. > av_samples_get_buffer_size() returns AVERROR(EINVAL) > which happens to be "-22". Thanks Carl! This is immensely helpful! From joe.flowers at nofreewill.com Fri Mar 1 12:56:19 2013 From: joe.flowers at nofreewill.com (Joe Flowers) Date: Fri, 1 Mar 2013 06:56:19 -0500 Subject: [Libav-user] Why (c->frame_size = 0) ? -> Could not allocate -22 bytes for samples buffer In-Reply-To: References: Message-ID: > For PCM codecs, frame_size is 0 and calling > av_samples_get_buffer_size() makes no sense. > av_samples_get_buffer_size() returns AVERROR(EINVAL) > which happens to be "-22". Does this mean that when encoding raw s16le data to AV_CODEC_ID_PCM_S16LE that I should expect a pkt.size = 0 too, even though the pkt.data has the correctly encoded data in it? Thanks! 
From cehoyos at ag.or.at Fri Mar 1 15:36:15 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Fri, 1 Mar 2013 14:36:15 +0000 (UTC) Subject: [Libav-user] =?utf-8?b?V2h5IChjLT5mcmFtZV9zaXplID0gMCkgPyAtPiBD?= =?utf-8?q?ould_not_allocate_-22_bytes_for_samples_buffer?= References: Message-ID: Joe Flowers writes: > Does this mean that when encoding raw s16le data to > AV_CODEC_ID_PCM_S16LE that I should expect a pkt.size = 0 too, even > though the pkt.data has the correctly encoded data in it? I did not test but this sounds unlikely / impossible to me. Carl Eugen From joe.flowers at nofreewill.com Fri Mar 1 15:51:58 2013 From: joe.flowers at nofreewill.com (Joe Flowers) Date: Fri, 1 Mar 2013 09:51:58 -0500 Subject: [Libav-user] Why (c->frame_size = 0) ? -> Could not allocate -22 bytes for samples buffer In-Reply-To: References: Message-ID: Thanks Carl! This is the point I am at today. Hopefully this thing will start working today. ret=pcm_encode_frame(avctx, &avpkt, frame, &got_packet_ptr); At this point, ret=0, got_packet_ptr=1, avpkt.data has something in it - not sure if it's garbage or not yet, BUT avpkt.size is 0. On Fri, Mar 1, 2013 at 9:36 AM, Carl Eugen Hoyos wrote: > Joe Flowers writes: > >> Does this mean that when encoding raw s16le data to >> AV_CODEC_ID_PCM_S16LE that I should expect a pkt.size = 0 too, even >> though the pkt.data has the correctly encoded data in it? > > I did not test but this sounds unlikely / impossible to me. From steve.hart at rtsw.co.uk Fri Mar 1 16:13:01 2013 From: steve.hart at rtsw.co.uk (Steve Hart) Date: Fri, 1 Mar 2013 15:13:01 +0000 Subject: [Libav-user] Record, seek and play Message-ID: Hi We are investigating the possibility of using ffmpeg to record and playback video Could anyone tell me if it is possible to play back and seek within a file that is recording at the same time. 
So if we have a program that starts capturing video from an external source, encodes and records to disk and, at the same time, plays the video to screen and (most importantly) seeks within it. (jog/shuttle/ffw etc). Is this possible? Any caveats? Any comments gratefully received. Cheers Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From cehoyos at ag.or.at Fri Mar 1 16:58:25 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Fri, 1 Mar 2013 15:58:25 +0000 (UTC) Subject: [Libav-user] Record, seek and play References: Message-ID: Steve Hart writes: > Could anyone tell me if it is possible to play back and > seek within a file that is recording at the same time. That depends on the file format you are using. (And maybe on the operating system.) Carl Eugen From steve.hart at rtsw.co.uk Fri Mar 1 17:35:10 2013 From: steve.hart at rtsw.co.uk (Steve Hart) Date: Fri, 1 Mar 2013 16:35:10 +0000 Subject: [Libav-user] Record, seek and play In-Reply-To: References: Message-ID: On 1 March 2013 15:58, Carl Eugen Hoyos wrote: > Steve Hart writes: > > > Could anyone tell me if it is possible to play back and > > seek within a file that is recording at the same time. > > That depends on the file format you are using. > (And maybe on the operating system.) > > Carl Eugen > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user > Hi Linux/Win7 Formats would be mainly mpeg2/dv/h264 I've done some testing and it seems that we can play OK and record OK separately. But if I try to play back a file that is currently recording I can only seek to a certain point but no further. We get stuck in av_read_frame, i.e. we open a stream to record and start recording to it. We then open another stream to play the same file back. 
In addition - with DV for example, I get Error: failed to map EditUnit -1 in IndexSID 1 to an offset Error: Truncating packet of size 576000 to 575469 I guess what I am asking is, is this even possible? Cheers Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From julian.herrera at tvgenius.net Fri Mar 1 17:35:28 2013 From: julian.herrera at tvgenius.net (Julian Herrera) Date: Fri, 01 Mar 2013 16:35:28 +0000 Subject: [Libav-user] Record, seek and play In-Reply-To: References: Message-ID: <5130D8D0.6010208@tvgenius.net> On 01/03/2013 15:13, Steve Hart wrote: > Hi > We are investigating the possibility of using ffmpeg to record and > playback video > Could anyone tell me if it is possible to play back and seek within a > file that is recording > at the same time. > So > If we have a program that starts capturing video from an external > source, encodes and records to disk > and at the same time, play the video to screen and (most importantly) > seek within it. > (jog/shuttle/ffw etc). > > Is this possible? > Any caveats? > I did something similar a few days ago. I used the Live555 library to dump an mpegts stream into a local temp file from an RTSP server. Then I used libav to play that file at the same time on different threads and everything worked fine. I guess you just have to be sure of not crossing the boundaries of the temporary file. Julian Herrera From steve.hart at rtsw.co.uk Fri Mar 1 17:41:42 2013 From: steve.hart at rtsw.co.uk (Steve Hart) Date: Fri, 1 Mar 2013 16:41:42 +0000 Subject: [Libav-user] Record, seek and play In-Reply-To: <5130D8D0.6010208@tvgenius.net> References: <5130D8D0.6010208@tvgenius.net> Message-ID: On 1 March 2013 16:35, Julian Herrera wrote: > On 01/03/2013 15:13, Steve Hart wrote: > >> Hi >> We are investigating the possibility of using ffmpeg to record and >> playback video >> Could anyone tell me if it is possible to play back and seek within a >> file that is recording >> at the same time. 
>> So >> If we have a program that starts capturing video from an external >> source, encodes and records to disk >> and at the same time, play the video to screen and (most importantly) >> seek within it. >> (jog/shuttle/ffw etc). >> >> Is this possible? >> Any caveats? >> >> > I did something similar a few days ago. I used the Live555 library to dump > an mpegts stream into a local temp file from an RTSP server. Then I used > libav to play that file at the same time on different threads and > everything worked fine. I guess you just have to be sure of not crossing > the boundaries of the temporary file. > > Julian Herrera > > > > > ______________________________**_________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/**listinfo/libav-user > The issue here is seeking I think. I can open the currently recording file and play it back fine - but as soon as I seek, I get errors/hangs. I ensure that any seek forward is clamped at 2 secs before the current record. I should mention this is dv50 into mxf wrap. Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From Andrew.Magill at exelisvis.com Fri Mar 1 18:12:19 2013 From: Andrew.Magill at exelisvis.com (Andrew Magill) Date: Fri, 1 Mar 2013 10:12:19 -0700 Subject: [Libav-user] Misidentification of data stream as AAC_LATM In-Reply-To: References: <0A35EA74E38BA04787179D409AF0CB3F049AA560@coexch01.rsinc.com> Message-ID: <0A35EA74E38BA04787179D409AF0CB3F04A22C2A@coexch01.rsinc.com> ..why, yes. Precisely like that. Thank you. 
-Andrew Magill -----Original Message----- From: libav-user-bounces at ffmpeg.org [mailto:libav-user-bounces at ffmpeg.org] On Behalf Of Carl Eugen Hoyos Sent: Friday, March 01, 2013 12:56 AM To: libav-user at ffmpeg.org Subject: Re: [Libav-user] Misidentification of data stream as AAC_LATM Andrew Magill writes: > Might I suggest eliminating the (max_frames>=1) condition As in http://git.videolan.org/?p=ffmpeg.git;a=commit;h=a60530e ? Carl Eugen _______________________________________________ Libav-user mailing list Libav-user at ffmpeg.org http://ffmpeg.org/mailman/listinfo/libav-user From rjvbertin at gmail.com Fri Mar 1 18:49:10 2013 From: rjvbertin at gmail.com (RenE J.V. Bertin) Date: Fri, 1 Mar 2013 18:49:10 +0100 Subject: [Libav-user] Record, seek and play In-Reply-To: References: <5130D8D0.6010208@tvgenius.net> Message-ID: <1612811298889833755@unknownmsgid> > > What if you buffer in memory by using a custom AvIO context or however that's called? At least you'd be able to catch seeks that go where they shouldn't ... and it might be less platform specific? R From steve.hart at rtsw.co.uk Sat Mar 2 17:48:56 2013 From: steve.hart at rtsw.co.uk (Steve Hart) Date: Sat, 2 Mar 2013 16:48:56 +0000 Subject: [Libav-user] Record, seek and play In-Reply-To: <1612811298889833755@unknownmsgid> References: <5130D8D0.6010208@tvgenius.net> <1612811298889833755@unknownmsgid> Message-ID: On 1 March 2013 17:49, RenE J.V. Bertin wrote: > > > > > What if you buffer in memory by using a custom AvIO context or however > that's called? At least you'd be able to catch seeks that go where they > shouldn't ... and it might be less platform specific? > > R > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user > The problem is that the recording could be hours long! The problem seems to be down to the fact that some formats don't write the header info until the file is closed. 
This is certainly true of mxf'd DV. I debugged into the mxf muxer and found that duration was 0 - no surprise really..... I do have more success if I seek using AV_SEEK_BYTE rather than AV_SEEK_FRAME which I am looking at now. I know some solutions record off 'chunks' of video and store the latest x mins in memory. Then have an algorithm that loads the relevant chunk ahead of time. But a bit laborious and I'd end up with a bunch of small files. Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: From krueger at lesspain.de Sat Mar 2 18:04:20 2013 From: krueger at lesspain.de (=?UTF-8?Q?Robert_Kr=C3=BCger?=) Date: Sat, 2 Mar 2013 18:04:20 +0100 Subject: [Libav-user] Record, seek and play In-Reply-To: References: <5130D8D0.6010208@tvgenius.net> <1612811298889833755@unknownmsgid> Message-ID: On Sat, Mar 2, 2013 at 5:48 PM, Steve Hart wrote: > On 1 March 2013 17:49, RenE J.V. Bertin wrote: >> >> > >> > >> What if you buffer in memory by using a custom AvIO context or however >> that's called? At least you'd be able to catch seeks that go where they >> shouldn't ... and it might be less platform specific? >> >> R >> _______________________________________________ >> Libav-user mailing list >> Libav-user at ffmpeg.org >> http://ffmpeg.org/mailman/listinfo/libav-user > > > The problem is that the recording could be hours long! > The problem seems to be down to the fact that some formats don't write the > header info > until the file is closed. This is certainly true of mxf'd DV. I debugged > into the mxf muxer > and found that duration was 0 - no surprise really..... > > I do have more success if I seek using AV_SEEK_BYTE rather than > AV_SEEK_FRAME > which I am looking at now. > > I know some solutions record off 'chunks' of video and store the latest > x mins in memory. Then have an algorithm that loads the relevant chunk ahead > of time. > But a bit laborious and I'd end up with a bunch of small files. 
> > Steve You could take a look at fragmented mp4. I'm not sure how well ffmpeg seeking works with it but at least in theory you should be able to produce a file where you have control over how much is available for seeking while you record by setting fragment_duration (or size) accordingly (fragmented means a header is written every X bytes or seconds). Seek performance would be worse than that of an mp4 file with one header (aka moov box/atom) because when you seek it works like a linked list jumping from fragment to fragment but it may very well be fast enough for your use case. Would be worth a try. Again, I have not personally tested this with ffmpeg seeking API. It may or may not be implemented for that demuxer. If it is not, however, you would have the option to offer a bounty to someone to add that functionality to the mov/mp4 demuxer. Robert From rjvbertin at gmail.com Sat Mar 2 22:20:10 2013 From: rjvbertin at gmail.com (=?iso-8859-1?Q?=22Ren=E9_J=2EV=2E_Bertin=22?=) Date: Sat, 2 Mar 2013 22:20:10 +0100 Subject: [Libav-user] Record, seek and play In-Reply-To: References: <5130D8D0.6010208@tvgenius.net> <1612811298889833755@unknownmsgid> Message-ID: On Mar 02, 2013, at 17:48, Steve Hart wrote: > The problem is that the recording could be hours long! Of course I wasn't suggesting to cache more than you can fit in a reasonable amount of memory, but if I had to guess I'd say that far backward searches are less likely to cause issues? Anyway, maybe it's enough to cache the information on where to find the content (frames, a bit like how QuickTime handles reference tracks), rather than the content itself? > The problem seems to be down to the fact that some formats don't write the header info > until the file is closed. This is certainly true of mxf'd DV. I debugged into the mxf muxer > and found that duration was 0 - no surprise really..... 
But you're not obliged to use that format, or support seeking in a format that doesn't support it (when there's the choice to use a format that does)? R. From nhanndt_87 at yahoo.com Mon Mar 4 07:07:13 2013 From: nhanndt_87 at yahoo.com (thanh nhan thanh nhan) Date: Sun, 3 Mar 2013 22:07:13 -0800 (PST) Subject: [Libav-user] libav-user list post! Message-ID: <1362377233.50418.YahooMailNeo@web121702.mail.ne1.yahoo.com> Dear libav-user list admin, I wanna post the following message on this mailing list: I am developing pcm decoding using ffmpeg libraries in Linux. I successfully built my program. The problem is the executable file size is very big (~63 MB) even though I just used a few functions for audio decoding. I wanna downsize the executable file as much as possible. I think the problem is in the linking process. I made the Makefile with: "g++ -L/usr/lib64 -L/usr/local/lib -g -o x2pcm main.o util.o -lavformat -lavcodec -lavutil -lswresample -lvorbis -lx264 -lmp3lame -logg -lvorbisenc -lvpx -lpthread -lz" Is there any suggestion for me to solve this problem? Thanks in advance! Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From cehoyos at ag.or.at Mon Mar 4 09:13:45 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Mon, 4 Mar 2013 08:13:45 +0000 (UTC) Subject: [Libav-user] libav-user list post! References: <1362377233.50418.YahooMailNeo@web121702.mail.ne1.yahoo.com> Message-ID: thanh nhan thanh nhan writes: > I am developing pcm decoding using ffmpeg libraries in Linux. > I successfully built my program. The problem is the executable > file size is very big (~63 MB) even though I just used a few > functions for audio decoding. I wanna downsize the executable > file as much as possible. Please provide your current FFmpeg configure line / start with --disable-all and enable everything you need. Carl Eugen From rjvbertin at gmail.com Mon Mar 4 09:22:07 2013 From: rjvbertin at gmail.com (RenE J.V. 
Bertin) Date: Mon, 4 Mar 2013 09:22:07 +0100 Subject: [Libav-user] libav-user list post! In-Reply-To: <1362377233.50418.YahooMailNeo@web121702.mail.ne1.yahoo.com> References: <1362377233.50418.YahooMailNeo@web121702.mail.ne1.yahoo.com> Message-ID: <-1226576938012469024@unknownmsgid> On 4 Mar 2013, at 07:07, thanh nhan thanh nhan wrote: Dear libav-user list admin, I wanna post the following message on this mailing list: I am developing pcm decoding using ffmpeg libraries in Linux. I successfully built my program. The problem is the executable file size is very big (~63 MB) even though I just used a few functions for audio decoding. I wanna downsize the executable file as much as possible. I think the problem is in the linking process. I made the Makefile with: "g++ -L/usr/lib64 -L/usr/local/lib -g -o x2pcm main.o util.o -lavformat -lavcodec -lavutil -lswresample -lvorbis -lx264 -lmp3lame -logg -lvorbisenc -lvpx -lpthread -lz" Is there any suggestion for me to solve this problem? Thanks in advance! Regards, ______________________________ What is the size you get when linking without the -g option? Also, are you using static or shared libraries, and why are you linking in video libraries if you're only doing audio decoding? R -------------- next part -------------- An HTML attachment was scrubbed... URL: From cehoyos at ag.or.at Mon Mar 4 09:29:25 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Mon, 4 Mar 2013 08:29:25 +0000 (UTC) Subject: [Libav-user] libav-user list post! References: <1362377233.50418.YahooMailNeo@web121702.mail.ne1.yahoo.com> <-1226576938012469024@unknownmsgid> Message-ID: RenE J.V. Bertin writes: [...] Please set your mailer to text-only when sending emails to this mailing list, this is how your mails look: http://ffmpeg.org/pipermail/libav-user/2013-March/003903.html (This gets completely unreadable after the second reply.) Carl Eugen From rjvbertin at gmail.com Mon Mar 4 09:38:42 2013 From: rjvbertin at gmail.com (RenE J.V. 
Bertin) Date: Mon, 4 Mar 2013 09:38:42 +0100 Subject: [Libav-user] libav-user list post! In-Reply-To: References: <1362377233.50418.YahooMailNeo@web121702.mail.ne1.yahoo.com> <-1226576938012469024@unknownmsgid> Message-ID: <-3117779901049392906@unknownmsgid> On 4 Mar 2013, at 09:29, Carl Eugen Hoyos wrote: > (This gets completely unreadable after the second reply.) Sorry, impossible with this client ... but what happens 2 replies after one posts a message is hardly my responsibility (and suggesting it is is just as rude as top-posting AFAIC). From cehoyos at ag.or.at Mon Mar 4 09:41:38 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Mon, 4 Mar 2013 08:41:38 +0000 (UTC) Subject: [Libav-user] libav-user list post! References: <1362377233.50418.YahooMailNeo@web121702.mail.ne1.yahoo.com> <-1226576938012469024@unknownmsgid> <-3117779901049392906@unknownmsgid> Message-ID: RenE J.V. Bertin writes: > On 4 Mar 2013, at 09:29, Carl Eugen Hoyos wrote: > > > (This gets completely unreadable after the second reply.) > > Sorry, impossible with this client ... Then please use another mail client. > but what happens 2 replies after one posts a message > is hardly my responsibility You misunderstand: I did not mean that your mail is readable and only gets unreadable if somebody replies but that your email is already unreadable (for some reason you removed the link above) and it will not get better after the next reply. Carl Eugen From rjvbertin at gmail.com Mon Mar 4 09:52:44 2013 From: rjvbertin at gmail.com (RenE J.V. Bertin) Date: Mon, 4 Mar 2013 09:52:44 +0100 Subject: [Libav-user] libav-user list post! 
In-Reply-To: References: <1362377233.50418.YahooMailNeo@web121702.mail.ne1.yahoo.com> <-1226576938012469024@unknownmsgid> <-3117779901049392906@unknownmsgid> Message-ID: <5640343887285273058@unknownmsgid> I take the trouble trying to help someone while working off an embedded system and all you can do is bitch because said system happens to be a bit more advanced than your archiving software can handle? For goodness' sake, there isn't even a need to quote the message I posted. Have you ever wondered why there are so few users helping out other users?! Don't worry, you won't get another reply from me in this thread and I may well wait until I really have nowhere else to ask a question before posting again. From nhanndt_87 at yahoo.com Mon Mar 4 09:57:48 2013 From: nhanndt_87 at yahoo.com (thanh nhan thanh nhan) Date: Mon, 4 Mar 2013 00:57:48 -0800 (PST) Subject: [Libav-user] libav-user list post! In-Reply-To: <-1226576938012469024@unknownmsgid> References: <1362377233.50418.YahooMailNeo@web121702.mail.ne1.yahoo.com> <-1226576938012469024@unknownmsgid> Message-ID: <1362387468.68247.YahooMailNeo@web121705.mail.ne1.yahoo.com> Dear all, I configured like this: ./configure --enable-gpl --enable-libfdk_aac --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 following the link: http://ffmpeg.org/trac/ffmpeg/wiki/CentosCompilationGuide. I got the same size when linking without the -g option. I also tried with static and dynamic linking but the result is still the same. All of the ffmpeg functions I used come from the two examples decoding_encoding.c and resampling_audio.c provided in the /doc/example directory of ffmpeg. Whenever I remove any library specified in the linking command, errors appear. 
For example: If I remove -lavcodec, the error is: g++ -o x2pcm main.o util.o -L/usr/lib64 -L/usr/local/lib -Wl,-Bstatic -lavformat -lavutil -lswresample -Wl,-Bdynamic -lvorbis -lx264 -lmp3lame -logg -lvorbisenc -lvpx -lz -lpthread /usr/local/lib/libavformat.a(rl2.o): In function `rl2_read_packet': /home/nhanndt/ffmpeg-source/ffmpeg/libavformat/rl2.c:245: undefined reference to `av_free_packet' /usr/local/lib/libavformat.a(sbgdec.o): In function `sbg_read_packet': /home/nhanndt/ffmpeg-source/ffmpeg/libavformat/sbgdec.c:1456: undefined reference to `av_new_packet' /usr/local/lib/libavformat.a(swfdec.o): In function `swf_read_packet': /home/nhanndt/ffmpeg-source/ffmpeg/libavformat/swfdec.c:440: undefined reference to `av_new_packet' /home/nhanndt/ffmpeg-source/ffmpeg/libavformat/swfdec.c:352: undefined reference to `av_new_packet' /home/nhanndt/ffmpeg-source/ffmpeg/libavformat/swfdec.c:363: undefined reference to `av_packet_new_side_data' and so on... ________________________________ From: RenE J.V. Bertin To: "This list is about using libavcodec, libavformat, libavutil, libavdevice and libavfilter." Sent: Monday, March 4, 2013 5:22 PM Subject: Re: [Libav-user] libav-user list post! On 4 Mar 2013, at 07:07, thanh nhan thanh nhan wrote: Dear libav-user list admin, >I wanna post the following message on this mailing list: > >I am developing pcm decoding using ffmpeg libraries in Linux. I successfully built my program. The problem is the executable file size is very big (~63 MB) even though I just used a few functions for audio decoding. I wanna downsize the executable file as much as possible. I think the problem is in the linking process. I made the Makefile with: > "g++ -L/usr/lib64 -L/usr/local/lib -g -o x2pcm main.o util.o -lavformat -lavcodec -lavutil -lswresample -lvorbis -lx264 -lmp3lame -logg -lvorbisenc -lvpx -lpthread -lz" >Is there any suggestion for me to solve this problem? >Thanks in advance! > 
>Regards, ______________________________ > What is the size you get when linking without the -g option? Also, are you using static or shared libraries, and why are you linking in video libraries if you're only doing decoding? R _______________________________________________ Libav-user mailing list Libav-user at ffmpeg.org http://ffmpeg.org/mailman/listinfo/libav-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From cehoyos at ag.or.at Mon Mar 4 10:16:08 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Mon, 4 Mar 2013 09:16:08 +0000 (UTC) Subject: [Libav-user] libav-user list post! References: <1362377233.50418.YahooMailNeo@web121702.mail.ne1.yahoo.com> <-1226576938012469024@unknownmsgid> <1362387468.68247.YahooMailNeo@web121705.mail.ne1.yahoo.com> Message-ID: thanh nhan thanh nhan writes: > I configured like this: > ./configure --enable-gpl --enable-libfdk_aac > --enable-libmp3lame --enable-libtheora > --enable-libvorbis --enable-libvpx --enable-libx264 This includes all default features of FFmpeg (and a few external libraries). If you want smaller libraries, you have to change your FFmpeg configure line (it cannot help if you change something about your linking command for your own application x2pcm, that cannot work!) - since you did not explain exactly which features you need, I can only guess but I suggest you start with a line like the following and test, you will find many missing features: $ ./configure --disable-everything --enable-protocol=file --enable-demuxer=aac,mp3 --enable-parser=aac,mpegaudio --enable-decoder=aac,mp3 --enable-encoder=pcm_s16le --enable-muxer=wav,pcm_s16le Please understand that this is certainly not exactly what you need, it is meant to help finding the right options. You can further decrease the size of the libraries by using --disable-all instead of --disable-everything but you need to enable even more features in that case. 
--disable-debug should not be necessary since I expect you will strip your final executable anyway, but you can of course add it to configure as well. Please google for "top-posting" and please avoid it here, it is considered rude. Carl Eugen From nhanndt_87 at yahoo.com Mon Mar 4 11:01:04 2013 From: nhanndt_87 at yahoo.com (thanh nhan thanh nhan) Date: Mon, 4 Mar 2013 02:01:04 -0800 (PST) Subject: [Libav-user] libav-user list post! In-Reply-To: References: <1362377233.50418.YahooMailNeo@web121702.mail.ne1.yahoo.com> <-1226576938012469024@unknownmsgid> <1362387468.68247.YahooMailNeo@web121705.mail.ne1.yahoo.com> Message-ID: <1362391264.92695.YahooMailNeo@web121702.mail.ne1.yahoo.com> >________________________________ > From: Carl Eugen Hoyos >To: libav-user at ffmpeg.org >Sent: Monday, March 4, 2013 6:16 PM >Subject: Re: [Libav-user] libav-user list post! >?? >thanh nhan thanh nhan writes: > >> I configured like this: >> ./configure --enable-gpl --enable-libfdk_aac >> --enable-libmp3lame --enable-libtheora >> --enable-libvorbis --enable-libvpx --enable-libx264 > >This includes all default features of FFmpeg (and a few >external libraries). >If you want smaller libraries, you have to change your >FFmpeg configure line (it cannot help if you change >something about your linking command for your own >application x2pcm, that cannot work!) - since you did >not explain exactly which features you need, I can >only guess but I suggest you start with a line like >the following and test, you will find many missing >features: >$ ./configure --disable-everything --enable-protocol=file >--enable-demuxer=aac,mp3 --enable-parser=aac,mpegaudio >--enable-decoder=aac,mp3 --enable-encoder=pcm_s16le >--enable-muxer=wav,pcm_s16le > >Please understand that this is certainly not exactly >what you need, it is meant to help finding the right >options. 
>You can further decrease the size of the libraries by >using --disable-all instead of --disable-everything >but you need to enable even more features in that case. > >--disable-debug should not be necessary since I expect >you will strip your final executable anyway, but you >can of course add it to configure as well. > >Please google for "top-posting" and please avoid it here, >it is considered rude. > >Carl Eugen > >_______________________________________________ >Libav-user mailing list >Libav-user at ffmpeg.org >http://ffmpeg.org/mailman/listinfo/libav-user > > > Hi Carl Eugen, Sorry for the "top-posting" problem. I changed my posting style; I don't know if it is acceptable or not. I saw that the ffmpeg library files such as libavcodec.a, libavformat.a, etc. are huge files. When we link them into the executable, are they attached to the executable at their whole size, no matter which ffmpeg functions we use? I understood what you typed. So, we cannot remove any library from the link command; what we can do is downsize the libraries. Am I right? All the features I need are: decoding aac, mp3, ogg, flac, wma, m4a, 3ga, wav into a pcm file and then resampling the pcm into the expected sample format, number of channels and sampling rate. Could you please recommend a configuration command? From cehoyos at ag.or.at Mon Mar 4 11:12:10 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Mon, 4 Mar 2013 10:12:10 +0000 (UTC) Subject: [Libav-user] libav-user list post! References: <1362377233.50418.YahooMailNeo@web121702.mail.ne1.yahoo.com> <-1226576938012469024@unknownmsgid> <1362387468.68247.YahooMailNeo@web121705.mail.ne1.yahoo.com> <1362391264.92695.YahooMailNeo@web121702.mail.ne1.yahoo.com> Message-ID: thanh nhan thanh nhan writes: > >$ ./configure --disable-everything --enable-protocol=file > >--enable-demuxer=aac,mp3 --enable-parser=aac,mpegaudio > >--enable-decoder=aac,mp3 --enable-encoder=pcm_s16le > >--enable-muxer=wav,pcm_s16le [...]
> So, We cannot remove any library in link command. > What we can do is downsize the libraries. Am i right? Yes. > All of features i need is: decoding the aac, mp3, > ogg, flac, wma, m4a, 3ga, wav Then please try to add them to the configure line I posted above and start testing. The configure options --list-decoders, --list-parsers etc. will help you. If there is a feature / decoder / demuxer that you fail to add, ask again! > into pcm file and then resample pcm file into > expected sample format, number of channel and > sampling rate. Add --enable-filter=aresample to allow resampling etc. Carl Eugen From alexcohn at netvision.net.il Mon Mar 4 13:02:34 2013 From: alexcohn at netvision.net.il (Alex Cohn) Date: Mon, 4 Mar 2013 14:02:34 +0200 Subject: [Libav-user] libav-user list post! In-Reply-To: <1362391264.92695.YahooMailNeo@web121702.mail.ne1.yahoo.com> References: <1362377233.50418.YahooMailNeo@web121702.mail.ne1.yahoo.com> <-1226576938012469024@unknownmsgid> <1362387468.68247.YahooMailNeo@web121705.mail.ne1.yahoo.com> <1362391264.92695.YahooMailNeo@web121702.mail.ne1.yahoo.com> Message-ID: On Mon, Mar 4, 2013 at 12:01 PM, thanh nhan thanh nhan wrote: > I saw that the ffmpeg library files such as libavcodec.a, libavformat.a....are huge-size files. When we link them into executable file, are they attached into executable file with the whole size no matter which ffmpeg function we use??? No, the linker resolves the necessary parts only. But the dependencies between ffmpeg components are sometimes built into the compiled code. That's why you can achieve significant gain if you compile the libraries with fewer features enabled. BR, Alex From ehouitte at yacast.fr Wed Mar 6 14:48:43 2013 From: ehouitte at yacast.fr (Emmanuel HOUITTE) Date: Wed, 6 Mar 2013 14:48:43 +0100 Subject: [Libav-user] How to have constant encoded file's size ? 
Message-ID: <7C0B03E609BB5B4CBF496040DCF482FB011C63CE7D1D@MAILBOX01.yacast.fr> I would like to do CBR H264/AAC with libav like the following command line, is it possible? I need file with constant size. To set the bitrate seems not to work. pAVCodecContext->bit_rate = video_bitrate; ffmpeg.exe -i D:\test\conv256\20130211_043000.ts -acodec libfaac -vcodec libx264 -s 360x288 -vb 256k -ab 32k D:\test\conv256\20130211_043000_3.mp4 ffmpeg version 0.7-rc1, Copyright (c) 2000-2011 the FFmpeg developers built on May 30 2012 16:33:26 with gcc 4.6.2 configuration: --enable-shared --disable-static --enable-memalign-hack --enable-nonfree --enable-l ibmp3lame --disable-debug --enable-libfaac --enable-libx264 --enable-librtmp --build-suffix=20120530 --enable-gpl --prefix=/2012-05-30 --extra-cflags=-I/local/include --extra-ldflags=-L/local/lib --ta rget-os=mingw32 libavutil 50. 40. 1 / 50. 40. 1 libavcodec 52.120. 0 / 52.120. 0 libavformat 52.108. 0 / 52.108. 0 libavdevice 52. 4. 0 / 52. 4. 0 libavfilter 1. 77. 0 / 1. 77. 0 libswscale 0. 13. 0 / 0. 13. 0 [mpeg2video @ 01f564c0] mpeg_decode_postinit() failure Last message repeated 1 times [mpegts @ 0098abd0] max_analyze_duration reached Input #0, mpegts, from 'D:\test\conv256\20130211_043000.ts': Duration: 00:05:00.04, start: 55627.633700, bitrate: 6128 kb/s Program 1537 Stream #0.0[0x78]: Video: mpeg2video (Main), yuv420p, 720x576 [PAR 64:45 DAR 16:9], 15000 kb/s, 25 fps, 25 tbr, 90k tbn, 50 tbc Stream #0.1[0x82](fra): Audio: mp2, 48000 Hz, stereo, s16, 192 kb/s [buffer @ 0261b310] w:720 h:576 pixfmt:yuv420p [scale @ 02433a60] w:720 h:576 fmt:yuv420p -> w:360 h:288 fmt:yuv420p flags:0xa0000004 [libx264 @ 01f55890] broken ffmpeg default settings detected [libx264 @ 01f55890] use an encoding preset (e.g. 
-vpre medium) [libx264 @ 01f55890] preset usage: -vpre -vpre [libx264 @ 01f55890] speed presets are listed in x264 --help [libx264 @ 01f55890] profile is optional; x264 defaults to high [libx264 @ 01f55890] 264 - core 114 - H.264/MPEG-4 AVC codec - Copyleft 2003-2011 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0x1:0 me=dia subme=8 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=4 chroma_me=0 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 constrained_intra=0 bframes=0 weightp=0 keyint=12 keyint_min=7 scenecut=0 intra_refresh=0 rc_lookahead=12 rc=abr mbtree=1 bitrate=256 ratetol=1.0 qcomp=0.50 qpmin=2 qpmax=31 qpstep=3 ip_ratio=1.25 aq=1:1.00 Output #0, mp4, to 'D:\test\conv256\20130211_043000_3.mp4': Metadata: encoder : Lavf52.108.0 Stream #0.0: Video: libx264, yuv420p, 360x288 [PAR 64:45 DAR 16:9], q=2-31, 256 kb/s, 25 tbn, 25 tbc Stream #0.1(fra): Audio: libfaac, 48000 Hz, stereo, s16, 32 kb/s Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Press [q] to stop encoding frame= 370 fps= 97 q=31.0 Lsize= 1487kB time=14.72 bitrate= 827.6kbits/s dup=6 drop=0 video:1362kB audio:115kB global headers:0kB muxing overhead 0.647599% -------------- next part -------------- An HTML attachment was scrubbed... URL: From belkevich at mlsdev.com Wed Mar 6 16:55:03 2013 From: belkevich at mlsdev.com (Alexey Belkevich) Date: Wed, 6 Mar 2013 17:55:03 +0200 Subject: [Libav-user] Decoding audio stream Message-ID: Should I use av_read_frame() and avcodec_decode_audio4() in application main thread (UI thread)? Or it's better to call them in background thread? -- Alexey Belkevich -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From krueger at lesspain.de Wed Mar 6 17:40:00 2013 From: krueger at lesspain.de (Robert Krüger) Date: Wed, 6 Mar 2013 17:40:00 +0100 Subject: [Libav-user] Fastest possible stream parsing In-Reply-To: References: Message-ID: On Thu, Feb 28, 2013 at 12:35 PM, Robert Krüger wrote: > On Thu, Feb 28, 2013 at 10:32 AM, Tocy wrote: >> i think you can consider av_parser_parse2 function >> > > That looks indeed promising. I will try that. Thank you! > > Robert I am still trying to understand if this will work by looking at the code and hacking debug output into the code of h264_parser.c and h264.c. I have a sample camera file for which only the first frame is reported as a keyframe when decoding it (AVFrame.key_frame), but for some frames AVCodecParserContext.keyframe is set to 1, and that would absolutely make sense taking into account what I know about the camera's GOP size from other sources. However, the field AVCodecParserContext.convergence_duration is never set to any value other than 0, and judging by its documentation it looks like it is the one I should really use to identify recovery points when parsing the stream. It would be nice if someone with knowledge of ffmpeg's h264 internals could tell me whether I can use this to identify recovery points in an h.264 stream, whether the inconsistency between AVCodecParserContext.keyframe and AVFrame.key_frame should be fixed, or whether I'm getting it all wrong. Thanks in advance, Robert From scdmusic at gmail.com Wed Mar 6 21:42:54 2013 From: scdmusic at gmail.com (John Locke) Date: Wed, 6 Mar 2013 15:42:54 -0500 Subject: [Libav-user] avio_alloc_context custom I/O read_packet issue Message-ID: I am trying to set up a custom I/O read callback to read from a std::list of frames I transmitted via TCP. Each std::list entry contains a frame buffer that was written using a custom I/O write_packet, with an associated frame size and some other information about each frame that I need.
I have verified I am getting all of the information by writing out my frame buffers to disk and opening them up in mplayer or vlc. The below appears to work, but after avformat_open_input(&m_FormatCtx,"fake.h264",NULL,NULL); None of the necessary AVContextCodec information appears to be there. It does find the codec based on the fake filename, fake.h264, but does not read anything in from my header, like height or width... Which makes me think it's just reading the fake filename to determine codec info and not reading in my I/O buffer... In my Read_Packet callback I write out the first 32K (all that is read when avformat_open_input is called) and write that out to disk. mplayer cannot play anything (obviously, it's only 32K), but it is able to parse the header and get height and width: VIDEO: [H264] 696x556 0bpp 25.000 fps 0.0 kbps ( 0.0 kbyte/s) I know the buffer has the correct information in it! It should be known that the frame size is larger than 32K, so I manage the position in the buffer with my opaque pointer. Once finished with the frame, I pop it off. static int Read_Packet(void *opaque, uint8_t *buf, int size) { FrameList *l = (FrameList *)opaque; if(l->empty()) return 0; Frame_ptr p = l->front(); int ret; if(p->H264_frameSize - p->H264_framePos < size) { memcpy(buf,p->H264_frameData + p->H264_framePos,p->H264_frameSize - p->H264_framePos); l->pop_front(); ret=p->H264_frameSize - p->H264_framePos; }else { memcpy(buf,p->H264_frameData + p->H264_framePos,size); p->H264_framePos+=size; ret=size; } fwrite(buf, ret, 1, fo); //for debugging... Save out 32K, in which mplayer can read the header return ret; } int main() ..... 
snip AVIOContext *avioctx; m_BufferCallBack= (unsigned char*)av_malloc(BUFFERSIZE * sizeof(uint8_t)); avioctx = avio_alloc_context( m_BufferCallBack, //IOBuffer BUFFERSIZE, //Buffer Size 0, //Write flag, only reading, so 0 m_H264FrameList, //H264 Frame pointer (opaque) Read_Packet, //Read callback NULL, //Write callback NULL); //file_seek... not using buffer again so 0 is ok m_FormatCtx->pb=avioctx; int ret = avformat_open_input(&m_FormatCtx,"fake.h264",NULL,NULL); if (ret < 0) printf("Cannot open buffer in libavformat\n"); videoStream=-1; videoStream = av_find_best_stream(m_FormatCtx,AVMEDIA_TYPE_VIDEO,-1,-1,&m_avCodec,0); // Get a pointer to the codec context for the video stream m_CodecCtx=m_FormatCtx->streams[videoStream]->codec; m_avCodec=avcodec_find_decoder(m_CodecCtx->codec_id); if(m_avCodec==NULL) return false; // Codec not found // Open codec if(avcodec_open2(m_CodecCtx, m_avCodec,NULL)<0) return false; // Could not open codec // Allocate video frame m_avFrame=avcodec_alloc_frame(); // Allocate an AVFrame structure m_avFrameRGB=avcodec_alloc_frame(); if(m_avFrameRGB==NULL) return false; av_log(NULL, AV_LOG_INFO, "Width %i | Height %i \n",m_CodecCtx->width,m_CodecCtx->height); // Determine required buffer size and allocate buffer //m_numBytes=avpicture_get_size(PIX_FMT_RGB24, m_CodecCtx->width,m_CodecCtx->height); m_numBytes=avpicture_get_size(PIX_FMT_YUYV422, m_CodecCtx->width,m_CodecCtx->height); m_buffer=new uint8_t[m_numBytes]; // Assign appropriate parts of buffer to image planes in m_avFrameRGB avpicture_fill((AVPicture *)m_avFrameRGB, m_buffer, PIX_FMT_YUYV422, m_CodecCtx->width, m_CodecCtx->height); ..... snip -------------- next part -------------- An HTML attachment was scrubbed... URL: From nhanndt_87 at yahoo.com Fri Mar 8 04:00:02 2013 From: nhanndt_87 at yahoo.com (thanh nhan thanh nhan) Date: Thu, 7 Mar 2013 19:00:02 -0800 (PST) Subject: [Libav-user] Clip audio file with specified time range! 
Message-ID: <1362711602.45938.YahooMailNeo@web121706.mail.ne1.yahoo.com> Hi all, Does anyone know how to clip an audio file to a specified time range in seconds (like int start_time, end_time) using ffmpeg functions? My current status is that I am running a while loop: while (av_read_frame(formatContext, &packet) == 0) { int frameFinished = 0; avcodec_decode_audio4(codecContext, frame, &frameFinished, &packet); // do something } I found some fields in AVStream and AVFrame that introduce time-base, time-stamp... but i don't clearly understand what they mean. How can i translate those kinds of time into real time units (seconds or milliseconds)? Thanks in advance! From ehouitte at yacast.fr Fri Mar 8 08:14:44 2013 From: ehouitte at yacast.fr (Emmanuel HOUITTE) Date: Fri, 8 Mar 2013 08:14:44 +0100 Subject: [Libav-user] Clip audio file with specified time range! In-Reply-To: <1362711602.45938.YahooMailNeo@web121706.mail.ne1.yahoo.com> References: <1362711602.45938.YahooMailNeo@web121706.mail.ne1.yahoo.com> Message-ID: <7C0B03E609BB5B4CBF496040DCF482FB011CAD7E2C25@MAILBOX01.yacast.fr> Hi, You need to use AVStream::time_base: double dNbSeconds = (double)(pAVPacket->dts - pAVStream->first_dts) * pAVStream->time_base.num / pAVStream->time_base.den; -----Original Message----- From: libav-user-bounces at ffmpeg.org [mailto:libav-user-bounces at ffmpeg.org] On behalf of thanh nhan thanh nhan Sent: Friday, March 8, 2013 04:00 To: libav-user at ffmpeg.org Subject: [Libav-user] Clip audio file with specified time range! Hi all, Does anyone know how to clip an audio file to a specified time range in seconds (like int start_time, end_time) using ffmpeg functions? My current status is that I am running a while loop: while (av_read_frame(formatContext, &packet) == 0) { int frameFinished = 0; avcodec_decode_audio4(codecContext, frame, &frameFinished, &packet); // do something } I found some fields in AVStream and AVFrame that introduce time-base, time-stamp...
but i don't clearly understand what they mean. How can i translate those kinds of time into real time units (seconds or milliseconds)? Thanks in advance! _______________________________________________ Libav-user mailing list Libav-user at ffmpeg.org http://ffmpeg.org/mailman/listinfo/libav-user From haridassagarn at tataelxsi.co.in Fri Mar 8 13:17:52 2013 From: haridassagarn at tataelxsi.co.in (Haridas Sagar N) Date: Fri, 8 Mar 2013 12:17:52 +0000 Subject: [Libav-user] (no subject) Message-ID: <5773415B74E79546B5E12D008F2014B11F3AD486@SIXPRD0410MB396.apcprd04.prod.outlook.com> I will be getting live streams from the network and depacketizing them as soon as frames are available, storing the result in a buffer for muxing into an mp4 file. My problem is that the buffer size is limited (maybe capable of holding 4 frames), so every time I read from the buffer and write it to the AVFormatContext through an AVIOContext, I then need to fetch the updated buffer data from the depacketizer output and update the AVFormatContext again. I don't want to use any buffer of larger size (say, more than four frames)... Can anyone suggest how to update an AVFormatContext through an AVIOContext? Any other suggestions are welcome. Thanks in advance Regards Haridas Sagar N Notice: The information contained in this e-mail message and/or attachments to it may contain confidential or privileged information. If you are not the intended recipient, any dissemination, use, review, distribution, printing or copying of the information contained in this e-mail message and/or attachments to it are strictly prohibited. If you have received this communication in error, please notify us by reply e-mail or telephone and immediately and permanently delete the message and any attachments. Thank you -------------- next part -------------- An HTML attachment was scrubbed...
URL: From haridassagarn at tataelxsi.co.in Fri Mar 8 13:19:02 2013 From: haridassagarn at tataelxsi.co.in (Haridas Sagar N) Date: Fri, 8 Mar 2013 12:19:02 +0000 Subject: [Libav-user] Problem:updating avformatcontext using aviocontext In-Reply-To: <5773415B74E79546B5E12D008F2014B11F3AD486@SIXPRD0410MB396.apcprd04.prod.outlook.com> References: <5773415B74E79546B5E12D008F2014B11F3AD486@SIXPRD0410MB396.apcprd04.prod.outlook.com> Message-ID: <5773415B74E79546B5E12D008F2014B11F3AE496@SIXPRD0410MB396.apcprd04.prod.outlook.com> I will be getting live streams from network, i will be depacketizing it as soon as frames are available and store it in buffer for muxing into mp4 file,my problem is buffer size is limited(may be capable of holding 4 frames) so every time i read from buffer and write it to avformatcontext using aviocontext, again i need to get the updated buffer data from depacketizer output and update avformatcontext using aviocontext, i dont want to use any buffer of larger size(say more than four frames) ... can anyone suggest how to update avformatcontext using aviocontext / any other suggestions are welcome.. Thanks in advance Regards Haridas Sagar N Notice: The information contained in this e-mail message and/or attachments to it may contain confidential or privileged information. If you are not the intended recipient, any dissemination, use, review, distribution, printing or copying of the information contained in this e-mail message and/or attachments to it are strictly prohibited. If you have received this communication in error, please notify us by reply e-mail or telephone and immediately and permanently delete the message and any attachments. Thank you -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From efyxps at gmail.com Sat Mar 9 10:38:49 2013 From: efyxps at gmail.com (Nicolas) Date: Sat, 9 Mar 2013 10:38:49 +0100 Subject: [Libav-user] Using custom asynchronous I/O with avio_alloc_context Message-ID: Hi, I am using libav with custom I/O (with avio_alloc_context()) for reading various streams. In my project, all my I/O is asynchronous, but ffmpeg callbacks work in a synchronous way. So right now, what I am doing is using an internal buffer 4 times bigger than my avio buffer; each time my read callback is called, I copy a part of my buffer and get new data asynchronously. This works well for reading, but when I seek my media I don't (yet) have data in my internal buffer. In such a case, what should I return in my read callback? Also, there is a flag AVIO_FLAG_NONBLOCKING that I could pass to avio_alloc_context, but I couldn't figure out how to use it (what should I return in my read callback?). Is there any way to use custom I/O in an asynchronous fashion? Thanks. Nic. -------------- next part -------------- An HTML attachment was scrubbed... URL: From danchami at hotmail.com Sat Mar 9 14:50:01 2013 From: danchami at hotmail.com (Danny Chami) Date: Sun, 10 Mar 2013 00:50:01 +1100 Subject: [Libav-user] Packing Raw AAC in MPEGTS & Extradata Message-ID: Hi All, I am writing an app that takes raw AAC packets generated by a HW encoder I have no control over, and I am inserting these directly into an AVPacket (copying directly to Pkt.data). However, when I pass them to the mpegts muxer, I am getting this error: "[mpegts @ 0x91b3600] AAC bitstream not in ADTS format and extradata missing" It seems I have to set the extradata field since the stream is not in ADTS format... any idea how / what I need to set the extradata to? Done quite a bit of searching with no luck. I am using a 44,100 Hz sample rate, 64000 bit rate and 2 channels. I also need this for 1 audio channel. Any pointers would be much appreciated.
Thanks,Danny -------------- next part -------------- An HTML attachment was scrubbed... URL: From danchami at hotmail.com Sat Mar 9 14:52:15 2013 From: danchami at hotmail.com (Danny Chami) Date: Sun, 10 Mar 2013 00:52:15 +1100 Subject: [Libav-user] Packing Raw AAC in MPEGTS & Extradata Message-ID: Hi All, I am writing an app that takes raw AAC packets generated by a HW encoder I have no control over, and I am inserting these directly into an AVPacket (copying directly to Pkt.data). however when I pass to the mpegts mux, I am getting this error " [mpegts @ 0x91b3600] AAC bitstream not in ADTS format and extradata missing" It seems, I have to set the extra data field as the stream is not in ADTS format.. any idea how / what I need to set the extradata to? Done quite a bit of searching with no luck. I am using 44,100 sample rate, 64000 bit rate and 2 channels. I also need this for 1 audio channel. Any pointers would be much appreciated. Thanks,Danny -------------- next part -------------- An HTML attachment was scrubbed... URL: From cehoyos at ag.or.at Sat Mar 9 16:58:47 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Sat, 9 Mar 2013 15:58:47 +0000 (UTC) Subject: [Libav-user] Packing Raw AAC in MPEGTS & Extradata References: Message-ID: Danny Chami writes: > It seems, I have to set the extra data field as the stream > is not in ADTS format.. any idea how / what I need to set > the extradata to? I wonder if adts_decode_extradata() in libavformat/adtsenc.c would help you? Carl Eugen From likai2 at lenovo.com Fri Mar 8 04:52:19 2013 From: likai2 at lenovo.com (Kai2 Li) Date: Fri, 8 Mar 2013 03:52:19 +0000 Subject: [Libav-user] Questions about ffmpeg's image conversion Message-ID: <8972D24A7447EC40AE4CB410372ACE2749E62E@CNMAILMBX03.lenovo.com> Hi, ffmpeg development team: I'm a software developer of Lenovo Inc. I'm using ffmpeg in my video project for TS stream playing. 
While playing TS, CPU usage goes up to 90%, of which the image conversion process (YUV to RGB) accounts for 25%-30%. In my project, ffmpeg's image conversion calls the function yuv2rgb_c_32 in the C file /libffmpeg/libswscale/yuv2rgb.c. Here is the source code: YUV2RGBFUNC(yuv2rgb_c_32, uint32_t, 0) LOADCHROMA(0); PUTRGB(dst_1, py_1, 0); PUTRGB(dst_2, py_2, 0); LOADCHROMA(1); PUTRGB(dst_2, py_2, 1); PUTRGB(dst_1, py_1, 1); LOADCHROMA(2); PUTRGB(dst_1, py_1, 2); PUTRGB(dst_2, py_2, 2); LOADCHROMA(3); PUTRGB(dst_2, py_2, 3); PUTRGB(dst_1, py_1, 3); ENDYUV2RGBLINE(8) LOADCHROMA(0); PUTRGB(dst_1, py_1, 0); PUTRGB(dst_2, py_2, 0); LOADCHROMA(1); PUTRGB(dst_2, py_2, 1); PUTRGB(dst_1, py_1, 1); ENDYUV2RGBFUNC() With your permission, I would like to ask some questions. a) Why is the function written with macros? b) Can the function be optimized? c) Are there updates or a newer version? d) Will ffmpeg have an ARM platform specialized version or patch? My hardware is a Samsung pad GT-P5110 (1 GHz dual core, 1 GB RAM, 32 GB SD card); the ffmpeg version is 0.11.1. I'm glad and eager for your reply, thank you very much. Best regards, Li Kai System Innovation Lab Lenovo, R&T No.6 Shangdi West Road Haidian District, Beijing likai2 at lenovo.com (TEL) +86 10-58861550 (FAX) +86 10-58863357 www.Lenovo.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: image001.png Type: image/png Size: 7119 bytes Desc: image001.png URL: From cehoyos at ag.or.at Sun Mar 10 09:48:58 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Sun, 10 Mar 2013 08:48:58 +0000 (UTC) Subject: [Libav-user] Questions about ffmpeg's image conversion References: <8972D24A7447EC40AE4CB410372ACE2749E62E@CNMAILMBX03.lenovo.com> Message-ID: Kai2 Li writes: > In my project, ffmpeg's image conversion call function > yuv2rgb_c_32 in C file > /libffmpeg/libswscale/yuv2rgb.c (This should not happen on x86, where optimized functions exist.) > With your please, I would like to ask some questions. > > a) Why the function writing by macro?? > b) Whether the function can be optimized? > > c) Has there some updates or new version? > > d) Will ffmpeg have ARM platform specialized version or patch? > My hardware is Sumsang Pad GT-P5110 (1G dural core, > 1G RAM, 32G SD card), ffmpeg version is 0.11.1 Is this ARM hardware? If yes: nobody has added ARM optimizations for libswscale so far. Look into the subdirectories in libswscale to see some examples (x86, ppc, sparc and bfin; consider looking at sparc if you really only need yuv2rgb32, while x86 contains many more optimizations). If you plan to work on adding optimizations, please update to current git head and please read http://ffmpeg.org/developer.html (And if you plan to implement this, ffmpeg-devel is of course the right mailing list for implementation-related questions; it isn't clear to me though whether you want to do that.) Carl Eugen From nicolas.george at normalesup.org Sun Mar 10 19:44:33 2013 From: nicolas.george at normalesup.org (Nicolas George) Date: Sun, 10 Mar 2013 19:44:33 +0100 Subject: [Libav-user] Using custom asynchronous I/O with avio_alloc_context In-Reply-To: References: Message-ID: <20130310184433.GA12119@phare.normalesup.org> Le nonidi 19 ventôse, an CCXXI, Nicolas a écrit : > I am using libav with custom I/O (with avio_alloc_context()) for reading > various stream.
> > In my project, all my I/O are asynchronous but ffmpeg callbacks work in a > synchronous way. So right now, what I am doing is to use an internal buffer > 4 times bigger than my avio buffer and each time my read callback is called > I copy a part of my buffer and get new data asynchronously, this works well > for reading, but when I seek my media I don't have (yet) data in my > internal buffer. In such case what should I return in my read callback? > > Also, there is a flag AVIO_FLAG_NONBLOCKING that I could pass to > avio_alloc_context but i couldn't figure how to use it (what should I > return in my read callback?) > > is there any way to use custom I/O in an asynchronous fashion? Unfortunately, no, there is currently no way of working with asynchronous and non-blocking I/O. Some of the network protocols can be set to work non-blocking, but not all, and the demuxers themselves cannot work in non-blocking mode. The usual solution for that is to use threads. You could, for example, run the demuxer in a separate thread. When the demuxer calls the read function in your custom I/O context, the thread becomes blocked waiting for data from the main thread, and when it has demuxed a packet, it can wake the main thread using some kind of signalfd/self-pipe. That way, the demuxing thread does not force you to abandon your async model (using a few threads has a tendency to force you to use threads everywhere). Also note that since you are using the thread not for concurrency but only to keep the local context of the demuxer, you do not have to use "real" threads; you can consider using, for example, GNU Pth or libpcl. Regards, -- Nicolas George -------------- next part -------------- A non-text attachment was scrubbed...
Name: not available Type: application/pgp-signature Size: 198 bytes Desc: Digital signature URL: From efyxps at gmail.com Mon Mar 11 14:59:32 2013 From: efyxps at gmail.com (Nicolas) Date: Mon, 11 Mar 2013 14:59:32 +0100 Subject: [Libav-user] Using custom asynchronous I/O with avio_alloc_context In-Reply-To: <20130310184433.GA12119@phare.normalesup.org> References: <20130310184433.GA12119@phare.normalesup.org> Message-ID: Hi, Thanks for these useful informations. It works great, I used libpcl. Cheers. On Sun, Mar 10, 2013 at 7:44 PM, Nicolas George < nicolas.george at normalesup.org> wrote: > Le nonidi 19 vent?se, an CCXXI, Nicolas a ?crit : > > I am using libav with custom I/O (with avio_alloc_context()) for reading > > various stream. > > > > In my project, all my I/O are asynchronous but ffmpeg callbacks work in a > > synchronous way. So right now, what I am doing is to use an internal > buffer > > 4 times bigger than my avio buffer and each time my read callback is > called > > I copy a part of my buffer and get new data asynchronously, this works > well > > for reading, but when I seek my media I don't have (yet) data in my > > internal buffer. In such case what should I return in my read callback? > > > > Also, there is a flag AVIO_FLAG_NONBLOCKING that I could pass to > > avio_alloc_context but i couldn't figure how to use it (what should I > > return in my read callback?) > > > > is there any way to use custom I/O in an asynchronous fashion? > > Unfortunately, no, there is currently no way of working with asynchronous > and non-blocking I/O. Some of the network protocols can be set to work > non-blocking, but not all, and the demuxers themselves can not work in > non-blocking mode. > > The usual solution for that is to use threads. You could, for example, run > the demuxer in a separate thread. 
When the demuxer calls the read function > in your custom I/O context, the thread becomes blocked waiting for data > from > the main thread, and when it has demuxed a packet, it can wake the main > thread using some kind of signalfd/self-pipe. That way, the demuxing thread > do not force you to abandon your async model (using a few threads has a > tendency to force you to use threads everywhere). > > Also note that since you are using the thread not for concurrency but only > to keep the local context of the demuxer, you do not have to use "real" > threads, you can consider using, for examples, GNU Pth, or libpcl. > > Regards, > > -- > Nicolas George > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.12 (GNU/Linux) > > iEYEARECAAYFAlE81JAACgkQsGPZlzblTJMtkACeLJzEZTJoWgQsPoMRjEHrKCuZ > aJMAoJaX+tNbEtsk0lmfasNLFnVD1BQd > =qnqx > -----END PGP SIGNATURE----- > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From julian.herrera at tvgenius.net Mon Mar 11 18:12:15 2013 From: julian.herrera at tvgenius.net (Julian Herrera) Date: Mon, 11 Mar 2013 17:12:15 +0000 Subject: [Libav-user] Video frames cannot be decoded after seek operation Message-ID: <513E106F.5000508@tvgenius.net> Hello all, I've developed a video player based on libav and I've come across an issue with seeking operations. I've coded the player pretty much like the tutorial available on http://dranger.com/ffmpeg What happens is that after an av_seek_frame() call and the corresponding avcodec_flush_buffers(), the next ~10 frames cannot be decoded by avcodec_decode_video2(). In fact the parameter "int *got_picture_ptr" returns false in those cases. This introduces an ugly slow down on the FPS rate until new video frames replace the undecoded ones in the queue. 
If I don't call avcodec_flush_buffers() after the seek operation, all frames are decoded but visual artifacts appear. The video is an MPEG-TS file, codecs are mpeg2video and mp2. Can you shed some light on what I should look for? Regards, Julian From bjoern.drabeck at gmail.com Tue Mar 12 10:47:56 2013 From: bjoern.drabeck at gmail.com (Bjoern Drabeck) Date: Tue, 12 Mar 2013 17:47:56 +0800 Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? Message-ID: Hi, as described in the ffmpeg documentation http://ffmpeg.org/platform.html#Windows under 4.2, I have set up MinGW/MSys to compile using the MSVC toolchain and c99-to-c89 tool. I have got that to build, however compared to builds from the zeranoe site (and also builds I have asked a friend of mine to make for me using mingw with gcc), I always end up with seeking problems. In most files everything seems to work fine, however when I have larger MKV files (for example I got one 15 GB movie file), the seeking can take several minutes (depending how far into the file I seek). The same code works fine however when I used one of the zeranoe builds (ie seeking near instant in the same file) The main difference seems to be that I use the msvc toolchain. I used this configuration (which contains the essentails I need: LGPL, v3, dxva2 support, shared dlls): --toolchain=msvc --enable-hwaccels --enable-dxva2 --disable-debug --enable-shared --disable-static --enable-version3 Before that I have also tried this: --toolchain=msvc --arch=x86_32 --target-os=win32 --enable-memalign-hack --disable-static --enable-shared --enable-runtime-cpudetect --disable-encoders --disable-muxers --enable-hwaccels --disable-w32threads --enable-version3 --enable-dxva2 --enable-hwaccel=h264_dxva2 --disable-debug" (and also some variations between the above two.. never with any luck) Has anyone experienced similar problems? Or any hint on how I can resolve that problem? 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From cehoyos at ag.or.at Tue Mar 12 11:19:17 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Tue, 12 Mar 2013 10:19:17 +0000 (UTC) Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? References: Message-ID: Bjoern Drabeck writes: > I have got that to build, however compared to builds > from the zeranoe site (and also builds I have asked a > friend of mine to make for me using mingw with gcc), > I always end up with seeking problems. This is surprising. Are you sure that you are testing the same versions? Did you try to disable optimizations? > In most files everything seems to work fine, however > when I have larger MKV files (for example I got one > 15 GB movie file) Does mkvalidator report the following? "Unnecessary secondary SeekHead was found at " (followed by a very large number) This would indicate ticket #2263, we don't even know how to produce such a file... [...] > I used this configuration (which contains the essentails > I need: LGPL, v3, dxva2 support, shared dlls): Completely unrelated: Could you explain why you "need" v3 ? Carl Eugen From danchami at hotmail.com Tue Mar 12 11:21:07 2013 From: danchami at hotmail.com (Danny Chami) Date: Tue, 12 Mar 2013 14:21:07 +0400 Subject: [Libav-user] Packing Raw AAC in MPEGTS & Extradata In-Reply-To: References: Message-ID: Thanks Carl, that was very helpful pointer. Doing the code crawl got me going. Thanks Danny On Mar 9, 2013, at 8:00 PM, "Carl Eugen Hoyos" wrote: > Danny Chami writes: > >> It seems, I have to set the extra data field as the stream >> is not in ADTS format.. any idea how / what I need to set >> the extradata to? > > I wonder if adts_decode_extradata() in libavformat/adtsenc.c > would help you? 
> > Carl Eugen > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user From bjoern.drabeck at gmail.com Tue Mar 12 11:52:44 2013 From: bjoern.drabeck at gmail.com (Bjoern Drabeck) Date: Tue, 12 Mar 2013 18:52:44 +0800 Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? In-Reply-To: References: Message-ID: > > I have got that to build, however compared to builds > > from the zeranoe site (and also builds I have asked a > > friend of mine to make for me using mingw with gcc), > > I always end up with seeking problems. > > This is surprising. > Are you sure that you are testing the same versions? > I have downloaded the zeranoe build marked as 1.1.3 and I also got http://ffmpeg.org/releases/ffmpeg-1.1.3.tar.bz2 and built that myself.. so I would say it's the same version. However I got the same problem with previous versions too (tried 1.0.1, and 1.1 for example). > Did you try to disable optimizations? > > For some reason I get build errors as soon as I use --disable-optimizations: LD libavutil/avutil-52.dll Creating library libavutil/avutil.lib and object libavutil/avutil.exp cpu.o : error LNK2019: unresolved external symbol _ff_get_cpu_flags_ppc referenced in function _av_get_cpu_flags cpu.o : error LNK2019: unresolved external symbol _ff_get_cpu_flags_arm referenced in function _av_get_cpu_flags libavutil/avutil-52.dll : fatal error LNK1120: 2 unresolved externals make: *** [libavutil/avutil-52.dll] Error 1 If I don't disable optimizations I don't get that and it builds fine... but no idea about that (I have never really looked into the ffmpeg code except for the public headers) > > In most files everything seems to work fine, however > > when I have larger MKV files (for example I got one > > 15 GB movie file) > > Does mkvalidator report the following? 
> "Unnecessary secondary SeekHead was found at " > (followed by a very large number) > This would indicate ticket #2263, we don't even > know how to produce such a file... > > No, what mkvalidator says is: mkvalidator 0.4.2: the file appears to be valid Track #1 V_MPEG4/ISO/AVC 423468 bits/s Track #2 A_DTS -216696 bits/s Track #3 S_TEXT/UTF8 1 bits/s Track #4 S_TEXT/UTF8 36 bits/s file created with mkv2rls v1.3 (date: 2010 aug 28) / x264.exe I also get a lot of: WRN0C0: First Block for video track #1 in Cluster at 16348469372 is not a keyframe > [...] > > > I used this configuration (which contains the essentails > > I need: LGPL, v3, dxva2 support, shared dlls): > > Completely unrelated: Could you explain why you "need" v3 ? > > No, I don't need it, sorry, copy-paste thing from somewhere else oops... I do need LGPL and dxva2 though... > Carl Eugen > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rjvbertin at gmail.com Tue Mar 12 12:02:01 2013 From: rjvbertin at gmail.com (=?iso-8859-1?Q?=22Ren=E9_J=2EV=2E_Bertin=22?=) Date: Tue, 12 Mar 2013 12:02:01 +0100 Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? In-Reply-To: References: Message-ID: On Mar 12, 2013, at 11:52, Bjoern Drabeck wrote: > > > I have got that to build, however compared to builds > > from the zeranoe site (and also builds I have asked a > > friend of mine to make for me using mingw with gcc), > > I always end up with seeking problems. > Just guessing here, but it does not seem impossible that the extra steps required to build using MSVC introduce some sort of glue code, and that might include seek-related code that is particularly non-optimal. Is there any reason to build ffmpeg with MSVC, rather than using mingw with msys or a mingw cross-compiler on a simple linux VM? 
R From bjoern.drabeck at gmail.com Tue Mar 12 12:10:14 2013 From: bjoern.drabeck at gmail.com (Bjoern Drabeck) Date: Tue, 12 Mar 2013 19:10:14 +0800 Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? In-Reply-To: References: Message-ID: > > > > I have got that to build, however compared to builds > > > from the zeranoe site (and also builds I have asked a > > > friend of mine to make for me using mingw with gcc), > > > I always end up with seeking problems. > > > > Just guessing here, but it does not seem impossible that the extra steps > required to build using MSVC introduce some sort of glue code, and that > might include seek-related code that is particularly non-optimal. > > Is there any reason to build ffmpeg with MSVC, rather than using mingw > with msys or a mingw cross-compiler on a simple linux VM? > > the main reason was basically just for convenience (build it all from just one place; also linux environments are new to me).. seemed to work fine at first, and was quite quick to set up... But right now am just thinking about setting up Ubuntu to see if I can get it to build correctly like that.. -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.orr at scala.com Tue Mar 12 15:54:17 2013 From: john.orr at scala.com (John Orr) Date: Tue, 12 Mar 2013 10:54:17 -0400 Subject: [Libav-user] Is there no LGPL deinterlacer left In-Reply-To: References: Message-ID: <513F4199.2050000@scala.com> The other day I noticed a commit that "lavc: Deprecate the deinterlace functions in libavcodec": http://git.videolan.org/?p=ffmpeg.git;a=commit;h=54b298fe5650c124c29a8283cfd05024ac409d3a AFAIK, that's the only LGPL'ed deinterlace code and the deinterlace code in lavf is GPL. Does anybody know if that is correct, or perhaps there is some other LGPL deinterlace code in the ffmpeg code base I'm not aware of? 
--Johno

From cehoyos at ag.or.at Tue Mar 12 15:56:21 2013
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Tue, 12 Mar 2013 14:56:21 +0000 (UTC)
Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem?
References: 
Message-ID: 

Bjoern Drabeck writes:

> > > I have got that to build, however compared to builds
> > > from the zeranoe site (and also builds I have asked a
> > > friend of mine to make for me using mingw with gcc),
> > > I always end up with seeking problems.
> >
> > This is surprising.
> > Are you sure that you are testing the same versions?
>
> I have downloaded the zeranoe build marked as 1.1.3 and I
> also got http://ffmpeg.org/releases/ffmpeg-1.1.3.tar.bz2
> and built that myself.. so I would say it's the same version.

Could you also test current git head? (Or, since that is atm unstable, origin/release/1.2.) If it does not work, can you test if ffmpeg (the application) allows to reproduce the problem? If not, a test-case will probably be required...

> > Did you try to disable optimizations?
>
> For some reason I get build errors as soon as I use
> --disable-optimizations:
>
> LD libavutil/avutil-52.dll
>    Creating library libavutil/avutil.lib and object libavutil/avutil.exp
> cpu.o : error LNK2019: unresolved external symbol _ff_get_cpu_flags_ppc
> referenced in function _av_get_cpu_flags

Dead-code elimination is always required to build FFmpeg; try --extra-cflags=-O1 (or actually the msvc equivalent).

(Please set your mailer to text-only if possible.)

Carl Eugen

From cehoyos at ag.or.at Tue Mar 12 15:58:50 2013
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Tue, 12 Mar 2013 14:58:50 +0000 (UTC)
Subject: [Libav-user] Is there no LGPL deinterlacer left
References: <513F4199.2050000@scala.com>
Message-ID: 

John Orr writes:

> The other day I noticed a commit that "lavc: Deprecate
> the deinterlace functions in libavcodec":

Deprecate != remove ;-)

There will be a better LGPL deinterlacer before the removal.
Carl Eugen From krueger at lesspain.de Tue Mar 12 16:02:13 2013 From: krueger at lesspain.de (=?UTF-8?Q?Robert_Kr=C3=BCger?=) Date: Tue, 12 Mar 2013 16:02:13 +0100 Subject: [Libav-user] Is there no LGPL deinterlacer left In-Reply-To: <513F4199.2050000@scala.com> References: <513F4199.2050000@scala.com> Message-ID: On Tue, Mar 12, 2013 at 3:54 PM, John Orr wrote: > > The other day I noticed a commit that "lavc: Deprecate the deinterlace > functions in libavcodec": > > http://git.videolan.org/?p=ffmpeg.git;a=commit;h=54b298fe5650c124c29a8283cfd05024ac409d3a > > > AFAIK, that's the only LGPL'ed deinterlace code and the deinterlace code in > lavf is GPL. > > Does anybody know if that is correct, or perhaps there is some other LGPL > deinterlace code in the ffmpeg code base I'm not aware of? > > --Johno AFAIK you are correct. We (and probably a number of others) are facing the same problem. However, there is an ongoing effort to relicense Yadif to LGPL so that can serve as a (much better) replacement for people requiring LGPL. However, if this effort is not successful, you will not have many options other than maintaining an old copy for a while while finding a replacement somewhere. If you want to use Yadif in a commercial product then you might want to consider offering some money for the relicensing as two other companies have offered and some more have hinted that they would do so. Regards, Robert From krueger at lesspain.de Tue Mar 12 16:03:11 2013 From: krueger at lesspain.de (=?UTF-8?Q?Robert_Kr=C3=BCger?=) Date: Tue, 12 Mar 2013 16:03:11 +0100 Subject: [Libav-user] Is there no LGPL deinterlacer left In-Reply-To: References: <513F4199.2050000@scala.com> Message-ID: On Tue, Mar 12, 2013 at 3:58 PM, Carl Eugen Hoyos wrote: > John Orr writes: > >> The other day I noticed a commit that "lavc: Deprecate >> the deinterlace functions in libavcodec": > > Deprecate != remove ;-) > > There will be a better LGPL deinterlacer before the removal. 
> > Carl Eugen > You were a few seconds faster :-). Good to hear that! From cehoyos at ag.or.at Tue Mar 12 16:04:15 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Tue, 12 Mar 2013 15:04:15 +0000 (UTC) Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? References: Message-ID: Bjoern Drabeck writes: > > Just guessing here, but it does not seem impossible > > that the extra steps required to build using MSVC > > introduce some sort of glue code, I don't think there is any glue code. (Remember the gcc bugs you recently found, a but in msvc - or in the FFmpeg code - is not less likely.) > > and that might include seek-related code that is > > particularly non-optimal. It could of course be related to the glue code mingw uses... > > Is there any reason to build ffmpeg with MSVC, rather > > than using mingw with msys or a mingw cross-compiler > > on a simple linux VM? Apart from the fact that I still cannot understand why building with a cross-compiler in a VM can be easier than doing a native mingw build: For years, this was one of the most often requested "features", probably to allow debugging within msvc. > the main reason was basically just for convenience > (build it all from just one place; also linux environments > are new to me).. seemed to work fine at first, and was > quite quick to set up... Thank you for posting this, I wondered how difficult it is! > But right now am just thinking about setting up Ubuntu to > see if I can get it to build correctly like that..? I believe you had to install mingw to use msvc, the gcc compiler you installed should work fine, no reason to setup Ubuntu to compile for win32. Carl Eugen From john.orr at scala.com Tue Mar 12 16:04:07 2013 From: john.orr at scala.com (John Orr) Date: Tue, 12 Mar 2013 11:04:07 -0400 Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? 
In-Reply-To: 
References: 
Message-ID: <513F43E7.9010400@scala.com>

On 3/12/2013 5:47 AM, Bjoern Drabeck wrote:
> Hi,
>
> as described in the ffmpeg documentation
> http://ffmpeg.org/platform.html#Windows under 4.2, I have set up
> MinGW/MSys to compile using the MSVC toolchain and c99-to-c89 tool. I
> have got that to build, however compared to builds from the zeranoe
> site (and also builds I have asked a friend of mine to make for me
> using mingw with gcc), I always end up with seeking problems.
>

I think the MSVC builds use 32-bit fstat and lseek functions, so they don't play or seek correctly with files > 2GB. I modified file_seek in libavformat/file.c locally like this:

/* XXX: use llseek */
static int64_t file_seek(URLContext *h, int64_t pos, int whence)
{
    FileContext *c = h->priv_data;
    int64_t ret;

    if (whence == AVSEEK_SIZE) {
#ifndef _MSC_VER
        struct stat st;

        ret = fstat(c->fd, &st);
#else
        struct _stat64 st;
        ret = _fstati64( c->fd, &st );
#endif
        return ret < 0 ? AVERROR(errno) : (S_ISFIFO(st.st_mode) ? 0 : st.st_size);
    }
#ifndef _MSC_VER
    ret = lseek(c->fd, pos, whence);
#else
    ret = _lseeki64(c->fd, pos, whence);
#endif
    return ret < 0 ? AVERROR(errno) : ret;
}

--Johno

From cehoyos at ag.or.at Tue Mar 12 16:10:01 2013
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Tue, 12 Mar 2013 15:10:01 +0000 (UTC)
Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem?
References: <513F43E7.9010400@scala.com>
Message-ID: 

John Orr writes:

>     if (whence == AVSEEK_SIZE) {
> #ifndef _MSC_VER
>         struct stat st;
>
>         ret = fstat(c->fd, &st);
> #else
>         struct _stat64 st;
>         ret = _fstati64( c->fd, &st );
> #endif
>         return ret < 0 ? AVERROR(errno) : (S_ISFIFO(st.st_mode) ? 0 : st.st_size);
>     }
> #ifndef _MSC_VER
>     ret = lseek(c->fd, pos, whence);
> #else
>     ret = _lseeki64(c->fd, pos, whence);
> #endif

Perhaps you could (fix the whitespace and) send a patch to ffmpeg-devel or set up a git clone to allow merging?

Thank you for the solution!
Carl Eugen From h.leppkes at gmail.com Tue Mar 12 16:14:37 2013 From: h.leppkes at gmail.com (Hendrik Leppkes) Date: Tue, 12 Mar 2013 16:14:37 +0100 Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? In-Reply-To: References: <513F43E7.9010400@scala.com> Message-ID: On Tue, Mar 12, 2013 at 4:10 PM, Carl Eugen Hoyos wrote: > John Orr writes: > >> if (whence == AVSEEK_SIZE) { >> #ifndef _MSC_VER >> struct stat st; >> >> ret = fstat(c->fd, &st); >> #else >> struct _stat64 st; >> ret = _fstati64( c->fd, &st ); >> #endif >> return ret < 0 ? AVERROR(errno) : (S_ISFIFO(st.st_mode) ? 0 : >> st.st_size); >> } >> #ifndef _MSC_VER >> ret = lseek(c->fd, pos, whence); >> #else >> ret = _lseeki64(c->fd, pos, whence); >> #endif > > Perhaps you could (fix the whitespace and) send a patch > to ffmpeg-devel or set up a git clone to allow merging? > If you send a patch, then first look at libavformat/os_support.h, it already has defines for mingw to map these functions, and should probably be enhanced there, instead of cluttering file.c with #ifdefs. From john.orr at scala.com Tue Mar 12 16:18:52 2013 From: john.orr at scala.com (John Orr) Date: Tue, 12 Mar 2013 11:18:52 -0400 Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? In-Reply-To: References: Message-ID: <513F475C.1010301@scala.com> On 3/12/2013 6:52 AM, Bjoern Drabeck wrote: > > > I have got that to build, however compared to builds > > from the zeranoe site (and also builds I have asked a > > friend of mine to make for me using mingw with gcc), > > I always end up with seeking problems. > > This is surprising. > Are you sure that you are testing the same versions? > > > I have downloaded the zeranoe build marked as 1.1.3 and I also got > http://ffmpeg.org/releases/ffmpeg-1.1.3.tar.bz2 and built that > myself.. so I would say it's the same version. However I got the same > problem with previous versions too (tried 1.0.1, and 1.1 for example). 
> > Did you try to disable optimizations?
>
> For some reason I get build errors as soon as I use
> --disable-optimizations:
>
> LD libavutil/avutil-52.dll
>    Creating library libavutil/avutil.lib and object libavutil/avutil.exp
> cpu.o : error LNK2019: unresolved external symbol
> _ff_get_cpu_flags_ppc referenced in function _av_get_cpu_flags
> cpu.o : error LNK2019: unresolved external symbol
> _ff_get_cpu_flags_arm referenced in function _av_get_cpu_flags
> libavutil/avutil-52.dll : fatal error LNK1120: 2 unresolved externals
> make: *** [libavutil/avutil-52.dll] Error 1
>
> If I don't disable optimizations I don't get that and it builds
> fine... but no idea about that (I have never really looked into the
> ffmpeg code except for the public headers)

Parts of the ffmpeg source code assume the compiler will remove the body of a conditional if the condition is always false, for example av_get_cpu_flags() in libavutil/cpu.c:

int av_get_cpu_flags(void)
{
    if (checked)
        return flags;

    if (ARCH_ARM) flags = ff_get_cpu_flags_arm();
    if (ARCH_PPC) flags = ff_get_cpu_flags_ppc();
    if (ARCH_X86) flags = ff_get_cpu_flags_x86();
    checked = 1;
    return flags;
}

If ARCH_ARM is the constant 0, the code assumes this reference to ff_get_cpu_flags_arm() will disappear. Treats that as an optimization, so if you turn off optimizations, the compiler will generate code to call ff_get_cpu_flags_arm, but that function won't exist if ARCH_ARM is false.

To get around that, I've used flags like these to compile a less optimized version for testing purposes:

--toolchain=msvc --optflags='-Zi -Og -Oy- -arch:SSE2' --extra-cflags='-Gy -MDd' --extra-ldflags='-OPT:REF -DEBUG -VERBOSE' --enable-shared

I've been using VC10. The thing that's handy for me is that it generates .pdb files (via the -Zi flag) and I can mostly step through code with the VC10 debugger. I had to modify config.mak to get rid of some conflicting flags; running the configuration script would add -Z7 (which contradicts -Zi).
It also would add -Oy which is the opposite of -Oy-, so I manually removed it. --Johno -------------- next part -------------- An HTML attachment was scrubbed... URL: From cehoyos at ag.or.at Tue Mar 12 16:22:46 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Tue, 12 Mar 2013 15:22:46 +0000 (UTC) Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? References: Message-ID: Carl Eugen Hoyos writes: > > > Just guessing here, but it does not seem impossible > > > that the extra steps required to build using MSVC > > > introduce some sort of glue code, > > I don't think there is any glue code. > (Remember the gcc bugs you recently found, a but > in msvc - or in the FFmpeg code - is not less likely.) But that is not the case here. > > > and that might include seek-related code that is > > > particularly non-optimal. > > It could of course be related to the glue code mingw > uses... ;-)) See libavformat/os_support.h as explained by Hendrik I suspect a change in line 34 is sufficient. Carl Eugen From rjvbertin at gmail.com Tue Mar 12 16:23:59 2013 From: rjvbertin at gmail.com (=?iso-8859-1?Q?=22Ren=E9_J=2EV=2E_Bertin=22?=) Date: Tue, 12 Mar 2013 16:23:59 +0100 Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? In-Reply-To: References: Message-ID: <868B08A1-BF90-4FEA-88D4-91E5CB8430E4@gmail.com> On Mar 12, 2013, at 16:04, Carl Eugen Hoyos wrote: > I don't think there is any glue code. But there is a conversion so that MSVC can compile the C 'dialect' ffmpeg uses, right? > (Remember the gcc bugs you recently found, a but ^^^ bug? It's not me who claimed it was a gcc bug, not before I know exactly what goes wrong where (this is about compiling with auto-vectorisation; a few test cases fail because of it). >>> and that might include seek-related code that is >>> particularly non-optimal. > > It could of course be related to the glue code mingw > uses... 
Yes, but in that case one would expect it in all mingw builds. > Apart from the fact that I still cannot understand why > building with a cross-compiler in a VM can be easier than For me, it's the build environment rather than the compiler. The build utilities evolved in and for unix/linux environments so IMHO everything just works smoother there (and MSYS doesn't help). Of course the balance may tip the other way if you start having to maintain a whole slew of libraries ffmpeg depends on (in case it's impossible to use prebuilt 'native mswin' builds of said libraries). BTW, last time I looked (a while ago), it wasn't that evident to find mingw binaries based on current/recent gcc versions (and capable to compile for 64 bits). With the build script available on zeranoe, it just takes a few hours to build an up to date cross compiler; it will even download and build the required dependencies. > doing a native mingw build: For years, this was one of the > most often requested "features", probably to allow > debugging within msvc. msvc debugging code generated with mingw? If indeed it can do that, it ought to work just as well if the code was generated by a mingw cross-compiler. > > the gcc compiler you installed should work fine, no reason > to setup Ubuntu to compile for win32. I kind of expect this to be available in a pre-configured image. R. From rjvbertin at gmail.com Tue Mar 12 16:30:20 2013 From: rjvbertin at gmail.com (=?iso-8859-1?Q?=22Ren=E9_J=2EV=2E_Bertin=22?=) Date: Tue, 12 Mar 2013 16:30:20 +0100 Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? In-Reply-To: References: Message-ID: On Mar 12, 2013, at 16:22, Carl Eugen Hoyos wrote: >> It could of course be related to the glue code mingw >> uses... > > ;-)) Joking with yourself or some alter ego? O:-) > See libavformat/os_support.h as explained by Hendrik > I suspect a change in line 34 is sufficient. The fstat explanation sounds likely ... 
if the OP confirms that his issue indeed exists only with files over 2Gb ! R From john.orr at scala.com Tue Mar 12 16:33:46 2013 From: john.orr at scala.com (John Orr) Date: Tue, 12 Mar 2013 11:33:46 -0400 Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? In-Reply-To: References: <513F43E7.9010400@scala.com> Message-ID: <513F4ADA.3070805@scala.com> On 3/12/2013 11:14 AM, Hendrik Leppkes wrote: > On Tue, Mar 12, 2013 at 4:10 PM, Carl Eugen Hoyos wrote: >> John Orr writes: >> >>> if (whence == AVSEEK_SIZE) { >>> #ifndef _MSC_VER >>> struct stat st; >>> >>> ret = fstat(c->fd, &st); >>> #else ... >>> Perhaps you could (fix the whitespace and) send a patch >>> to ffmpeg-devel or set up a git clone to allow merging? >>> > If you send a patch, then first look at libavformat/os_support.h, it > already has defines for mingw to map these functions, and should > probably be enhanced there, instead of cluttering file.c with #ifdefs. I'll be glad to try to fix it properly. Bear in mind that I'm not so well versed in making patches. --Johno From h.leppkes at gmail.com Tue Mar 12 16:36:13 2013 From: h.leppkes at gmail.com (Hendrik Leppkes) Date: Tue, 12 Mar 2013 16:36:13 +0100 Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? In-Reply-To: <513F4ADA.3070805@scala.com> References: <513F43E7.9010400@scala.com> <513F4ADA.3070805@scala.com> Message-ID: On Tue, Mar 12, 2013 at 4:33 PM, John Orr wrote: > On 3/12/2013 11:14 AM, Hendrik Leppkes wrote: >> >> On Tue, Mar 12, 2013 at 4:10 PM, Carl Eugen Hoyos >> wrote: >>> >>> John Orr writes: >>> >>>> if (whence == AVSEEK_SIZE) { >>>> #ifndef _MSC_VER >>>> struct stat st; >>>> >>>> ret = fstat(c->fd, &st); >>>> #else > > > ... > > >>>> Perhaps you could (fix the whitespace and) send a patch >>>> to ffmpeg-devel or set up a git clone to allow merging? 
>>>> >> If you send a patch, then first look at libavformat/os_support.h, it >> already has defines for mingw to map these functions, and should >> probably be enhanced there, instead of cluttering file.c with #ifdefs. > > > I'll be glad to try to fix it properly. Bear in mind that I'm not so well > versed in making patches. > > --Johno > No worries, the fix (in theory) is so trivial that I'll simply send a patch to ffmpeg-devel in a few minutes, if you could test it out to confirm, that would be great. From john.orr at scala.com Tue Mar 12 16:38:18 2013 From: john.orr at scala.com (John Orr) Date: Tue, 12 Mar 2013 11:38:18 -0400 Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? In-Reply-To: <513F475C.1010301@scala.com> References: <513F475C.1010301@scala.com> Message-ID: <513F4BEA.4060603@scala.com> On 3/12/2013 11:18 AM, John Orr wrote: > > If ARCH_ARM is the constant 0, the code assumes this reference to > ff_get_cpu_flags_arm() will disappear. Treats that as an optimization... Oops, seems I left out some important words there. make that: "If ARCH_ARM is the constant 0, the code assumes this reference to ff_get_cpu_flags_arm() will disappear. *The Visual C compiler* treats that as an optimization..." -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.orr at scala.com Tue Mar 12 16:42:07 2013 From: john.orr at scala.com (John Orr) Date: Tue, 12 Mar 2013 11:42:07 -0400 Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? 
In-Reply-To: References: <513F43E7.9010400@scala.com> <513F4ADA.3070805@scala.com> Message-ID: <513F4CCF.8090700@scala.com> On 3/12/2013 11:36 AM, Hendrik Leppkes wrote: > On Tue, Mar 12, 2013 at 4:33 PM, John Orr wrote: >> On 3/12/2013 11:14 AM, Hendrik Leppkes wrote: >>> On Tue, Mar 12, 2013 at 4:10 PM, Carl Eugen Hoyos >>> wrote: >>>> John Orr writes: >>>> >>>>> if (whence == AVSEEK_SIZE) { >>>>> #ifndef _MSC_VER >>>>> struct stat st; >>>>> >>>>> ret = fstat(c->fd, &st); >>>>> #else >> >> ... >> >> >>>>> Perhaps you could (fix the whitespace and) send a patch >>>>> to ffmpeg-devel or set up a git clone to allow merging? >>>>> >>> If you send a patch, then first look at libavformat/os_support.h, it >>> already has defines for mingw to map these functions, and should >>> probably be enhanced there, instead of cluttering file.c with #ifdefs. >> >> I'll be glad to try to fix it properly. Bear in mind that I'm not so well >> versed in making patches. >> >> --Johno >> > No worries, the fix (in theory) is so trivial that I'll simply send a > patch to ffmpeg-devel in a few minutes, if you could test it out to > confirm, that would be great. > OK, will do. --Johno From john.orr at scala.com Tue Mar 12 16:45:26 2013 From: john.orr at scala.com (John Orr) Date: Tue, 12 Mar 2013 11:45:26 -0400 Subject: [Libav-user] Is there no LGPL deinterlacer left In-Reply-To: References: <513F4199.2050000@scala.com> Message-ID: <513F4D96.6090700@scala.com> On 3/12/2013 11:02 AM, Robert Kr?ger wrote: > On Tue, Mar 12, 2013 at 3:54 PM, John Orr wrote: >> The other day I noticed a commit that "lavc: Deprecate the deinterlace >> functions in libavcodec": >> >> http://git.videolan.org/?p=ffmpeg.git;a=commit;h=54b298fe5650c124c29a8283cfd05024ac409d3a >> >> >> AFAIK, that's the only LGPL'ed deinterlace code and the deinterlace code in >> lavf is GPL. >> >> Does anybody know if that is correct, or perhaps there is some other LGPL >> deinterlace code in the ffmpeg code base I'm not aware of? 
>>
>> --Johno
> AFAIK you are correct. We (and probably a number of others) are facing
> the same problem. However, there is an ongoing effort to relicense
> Yadif to LGPL so that it can serve as a (much better) replacement for
> people requiring LGPL. However, if this effort is not successful, you
> will not have many options other than maintaining an old copy for a
> while while finding a replacement somewhere.
>
> If you want to use Yadif in a commercial product then you might want
> to consider offering some money for the relicensing, as two other
> companies have offered and some more have hinted that they would do
> so.

OK, thanks. I'll go shake some trees and see what falls out.

--Johno

From donmoir at comcast.net Tue Mar 12 19:51:31 2013
From: donmoir at comcast.net (Don Moir)
Date: Tue, 12 Mar 2013 14:51:31 -0400
Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem?
References: 
Message-ID: 

>> > Is there any reason to build ffmpeg with MSVC, rather
>> > than using mingw with msys or a mingw cross-compiler
>> > on a simple linux VM?
>
> Apart from the fact that I still cannot understand why
> building with a cross-compiler in a VM can be easier than
> doing a native mingw build: For years, this was one of the
> most often requested "features", probably to allow
> debugging within msvc.

For me it was easy to set up a linux machine on the LAN here and use all the tools and libraries associated with that. So it's an easy step to get required libraries, tools, etc., and build in that fashion.

The main motivation for using MSVC to build ffmpeg at this point for me would be to be able to debug ffmpeg from MSVC. Does this work or not?

Another motivation might be to get static link builds to work instead of DLL. Anyone know if that works? Static link builds from cross-compile linux have link problems when linking in MSVC. I was able to get a static link to work with a minimal build, but it was always something, so I gave up on it.
From john.orr at scala.com Tue Mar 12 20:08:05 2013 From: john.orr at scala.com (John Orr) Date: Tue, 12 Mar 2013 15:08:05 -0400 Subject: [Libav-user] Building with MSVC toolchain resulting in seekingproblem? In-Reply-To: References: Message-ID: <513F7D15.1010103@scala.com> On 3/12/2013 2:51 PM, Don Moir wrote: > > The main motivation for using MSVC to build ffmpeg at this point for > me would be to be able to debug ffmpeg from MSVC. Does this work or not ? > It mostly works for me when I disabled Frame Pointer Omission: */Oy-* http://msdn.microsoft.com/en-us/library/2kxx5t2c%28v=vs.100%29.aspx I also set the debug information format to use PDB: /Zi http://msdn.microsoft.com/en-us/library/958x11bc%28v=vs.100%29.aspx The configuration script seem to generate both -Oy and -Oy- flags (as well as both -Zi and -Z7), so, in config.mak, I had to manually remove -Oy and -Z7 from CFLAGS and replace -Z7 in config.mak with -Zi. I don't yet understand the configuration script well enough to know how to fix that properly, so I did it manually. The Visual Studio 2010 debugger is able to navigate through calls through the ffmpeg dlls this way. Sometimes it does not see the local variables. > Another motivation might be to get static link builds to work instead > of DLL. Anyone know if that works ? static link builds from > cross-compile linux have link problems when linking in MSVC. I was > able to get a static link to work with a minimal build but it was > always something so gave up on it. > Sorry, I didn't try that. --Johno -------------- next part -------------- An HTML attachment was scrubbed... URL: From donmoir at comcast.net Tue Mar 12 20:32:19 2013 From: donmoir at comcast.net (Don Moir) Date: Tue, 12 Mar 2013 15:32:19 -0400 Subject: [Libav-user] Building with MSVC toolchain resulting inseekingproblem? 
References: <513F7D15.1010103@scala.com> Message-ID: <718D1C59351F46CEAAAEE7D77C31B9AA@MANLAP> From: John Orr To: libav-user at ffmpeg.org Sent: Tuesday, March 12, 2013 3:08 PM Subject: Re: [Libav-user] Building with MSVC toolchain resulting inseekingproblem? On 3/12/2013 2:51 PM, Don Moir wrote: >>The main motivation for using MSVC to build ffmpeg at this point for me would be to be able to debug ffmpeg from MSVC. Does this work or not ? >It mostly works for me when I disabled Frame Pointer Omission: /Oy- >http://msdn.microsoft.com/en-us/library/2kxx5t2c%28v=vs.100%29.aspx >I also set the debug information format to use PDB: /Zi >http://msdn.microsoft.com/en-us/library/958x11bc%28v=vs.100%29.aspx >The configuration script seem to generate both -Oy and -Oy- flags (as well as both -Zi and -Z7), >so, in config.mak, I had to manually remove -Oy and -Z7 from CFLAGS and replace -Z7 in config.mak with -Zi. >I don't yet understand the configuration script well enough to know how to fix that properly, so I did it manually. >The Visual Studio 2010 debugger is able to navigate through calls through the ffmpeg dlls this way. >Sometimes it does not see the local variables. Thanks John. Maybe this should be added to the documentation somewhere or other steps taken based on this. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rjvbertin at gmail.com Tue Mar 12 20:36:17 2013 From: rjvbertin at gmail.com (=?iso-8859-1?Q?=22Ren=E9_J=2EV=2E_Bertin=22?=) Date: Tue, 12 Mar 2013 20:36:17 +0100 Subject: [Libav-user] Building with MSVC toolchain resulting in seekingproblem? 
In-Reply-To: <513F7D15.1010103@scala.com> References: <513F7D15.1010103@scala.com> Message-ID: <639C30E1-786E-4F78-840E-3F9F086EC257@gmail.com> On Mar 12, 2013, at 20:08, John Orr wrote: > The configuration script seem to generate both -Oy and -Oy- flags (as well as both -Zi and -Z7), so, in config.mak, I had to manually remove -Oy and -Z7 from CFLAGS and replace -Z7 in config.mak with -Zi. I don't yet understand the configuration script well enough to know how to fix that properly, so I did it manually. > > The Visual Studio 2010 debugger is able to navigate through calls through the ffmpeg dlls this way. Sometimes it does not see the local variables. Of course it should be able, if MSVC compiled the code. The missing local variables are probably because of the optimisation that has to be done for dead code elimination, as explained earlier. (I've already spent some time on an earlier libav version to enable building with -O0, in order to be able to step through the code without optimisation-related surprises. It's possible, but it can take a while.) R From joe.flowers at nofreewill.com Tue Mar 12 21:42:19 2013 From: joe.flowers at nofreewill.com (Joe Flowers) Date: Tue, 12 Mar 2013 16:42:19 -0400 Subject: [Libav-user] Seeing the linker options when make-ing ffmpeg? Message-ID: Hello, I know I am able to see the compiler options used when building ffmpeg with the following command. make V=1 Is there a similar command for seeing the linker options? Or, perhaps some other reasonable way to see the linker options? Thanks! From cehoyos at ag.or.at Tue Mar 12 23:09:12 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Tue, 12 Mar 2013 22:09:12 +0000 (UTC) Subject: [Libav-user] Seeing the linker options when make-ing ffmpeg? References: Message-ID: Joe Flowers writes: > I know I am able to see the compiler options used > when building ffmpeg with the following command. > > make V=1 > > Is there a similar command for seeing the linker options? 
I wanted to answer "make V=1" but perhaps you mean "--extra-ldflags=-Wl,-v" ? Carl Eugen From cehoyos at ag.or.at Tue Mar 12 23:11:04 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Tue, 12 Mar 2013 22:11:04 +0000 (UTC) Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? References: Message-ID: Bjoern Drabeck writes: > In most files everything seems to work fine, however > when I have larger MKV files (for example I got one > 15 GB movie file), the seeking can take several minutes This should be fixed in current git head by a patch from Hendrik. Thank you for the report, thank you John Orr for the analysis! Carl Eugen From cehoyos at ag.or.at Tue Mar 12 23:18:38 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Tue, 12 Mar 2013 22:18:38 +0000 (UTC) Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? References: <513F475C.1010301@scala.com> Message-ID: John Orr writes: > Parts of ffmpeg source code assume the compiler will remove > the body of a conditional if the condition is always false Could you test if the following fixes compilation with --disable-optimizations with msvc? Insert a line >> _cflags_noopt="-O1" << after the line >> _cflags_size="-O1" << which should be line 2746. If not: Does --enable-small work? Carl Eugen From john.orr at scala.com Wed Mar 13 00:18:02 2013 From: john.orr at scala.com (John Orr) Date: Tue, 12 Mar 2013 19:18:02 -0400 Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? In-Reply-To: References: <513F475C.1010301@scala.com> Message-ID: <513FB7AA.4080201@scala.com> On 3/12/2013 6:18 PM, Carl Eugen Hoyos wrote: > John Orr writes: > >> Parts of ffmpeg source code assume the compiler will remove >> the body of a conditional if the condition is always false > Could you test if the following fixes compilation with > --disable-optimizations with msvc? 
> Insert a line >> _cflags_noopt="-O1" << after > the line >> _cflags_size="-O1" << which should be line 2746. I tried this in the 1.1.3 branch version of configure just above the line: # Nonstandard output options, to avoid msys path conversion issues, relies on wrapper to remap it It eventually fails to link: cpu.o : error LNK2019: unresolved external symbol _ff_get_cpu_flags_ppc referenced in function _av_get_cpu_flags cpu.o : error LNK2019: unresolved external symbol _ff_get_cpu_flags_arm referenced in function _av_get_cpu_flags libavutil/avutilmm-52.dll : fatal error LNK1120: 2 unresolved externals make: *** [libavutil/avutilmm-52.dll] Error 1 > If not: Does --enable-small work? I'll try in a little bit. --Johno From john.orr at scala.com Wed Mar 13 04:45:48 2013 From: john.orr at scala.com (John Orr) Date: Tue, 12 Mar 2013 23:45:48 -0400 Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? In-Reply-To: <513FB7AA.4080201@scala.com> References: <513F475C.1010301@scala.com> <513FB7AA.4080201@scala.com> Message-ID: <513FF66C.1020003@scala.com> On 3/12/2013 7:18 PM, John Orr wrote: > >> If not: Does --enable-small work? > > I'll try in a little bit. > Nope. --enable-small gets the same link error. --Johno From cehoyos at ag.or.at Wed Mar 13 08:39:52 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Wed, 13 Mar 2013 07:39:52 +0000 (UTC) Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? References: <513F475C.1010301@scala.com> <513FB7AA.4080201@scala.com> <513FF66C.1020003@scala.com> Message-ID: John Orr writes: > On 3/12/2013 7:18 PM, John Orr wrote: > > > >> If not: Does --enable-small work? > > > > I'll try in a little bit. > > > > Nope. --enable-small gets the same link error. I suspect if this is fixed, it will be easier to understand how to map -O0 to to something that works in msvc_flags() in configure. 
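The manual config.mak cleanup John described earlier (removing the stray -Oy and replacing -Z7 with -Zi) can also be scripted instead of done by hand. A sketch assuming GNU sed, run here against a made-up config.mak for illustration only:

```shell
# Illustrative only: a fake config.mak standing in for the generated one.
printf 'CFLAGS=-nologo -Z7 -Oy -Oy- -W3\n' > config.mak.example

# Drop the bare -Oy (keeping -Oy-) and swap -Z7 for -Zi. GNU sed assumed.
sed -i -e 's/ -Oy / /g' -e 's/-Z7/-Zi/g' config.mak.example

cat config.mak.example   # CFLAGS=-nologo -Zi -Oy- -W3
```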
I looked here to find out what could make the difference: http://msdn.microsoft.com/en-us/library/k1ack8f1%28v=vs.100%29.aspx But did not find anything obvious, testing will be necessary. Carl Eugen From bjoern.drabeck at gmail.com Wed Mar 13 08:52:41 2013 From: bjoern.drabeck at gmail.com (Bjoern Drabeck) Date: Wed, 13 Mar 2013 15:52:41 +0800 Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? In-Reply-To: References: Message-ID: > > In most files everything seems to work fine, however > > when I have larger MKV files (for example I got one > > 15 GB movie file), the seeking can take several minutes > > This should be fixed in current git head by a patch > from Hendrik. > > Ok, verified, just made a build from latest code on GIT, and seeking seems to work fine now! Thanks all for your help and the patch! > Thank you for the report, thank you John Orr for > the analysis! > > Btw I think there are still a couple more issues with the configure going wrong sometimes, depending what options I choose. Bit later when I got more time I can create a list of options and outcomes.. Will try to see if I can get a debug build to work which allows me to step into the code, with the feedback from John > Carl Eugen > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From h.leppkes at gmail.com Wed Mar 13 09:45:31 2013 From: h.leppkes at gmail.com (Hendrik Leppkes) Date: Wed, 13 Mar 2013 09:45:31 +0100 Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? In-Reply-To: References: <513F475C.1010301@scala.com> <513FB7AA.4080201@scala.com> <513FF66C.1020003@scala.com> Message-ID: On Wed, Mar 13, 2013 at 8:39 AM, Carl Eugen Hoyos wrote: > John Orr writes: > >> On 3/12/2013 7:18 PM, John Orr wrote: >> > >> >> If not: Does --enable-small work? 
>> > >> > I'll try in a little bit. >> > >> >> Nope. --enable-small gets the same link error. > > I suspect if this is fixed, it will be easier to > understand how to map -O0 to to something that > works in msvc_flags() in configure. > > I looked here to find out what could make the difference: > http://msdn.microsoft.com/en-us/library/k1ack8f1%28v=vs.100%29.aspx > But did not find anything obvious, testing will be necessary. > MSVC optimization options are not as fine-grained as other compilers. The option that turns on dead-code elim also turns on all sorts of other optimizations that strip out the local variable infos. What i do when i want to debug a specific piece of code is to turn the optimizer off for that part of the code, which can be done with a compiler pragma like this: #pragma optimize("", off) And any code following it will not be optimized at all, while keeping all other files optimized. Like people pointed out before, the only thing really required is to turn off frame pointer suppression, otherwise debugging is no fun at all. I have a local patch for that, which makes --enable-debug build a debuggable MSVC version with all the appropriate options (frame pointers, proper debug info generation), i'll send it to the ML in a few days, after all the merge noise has died down a bit and i rebased my local patches on top of it. - Hendrik From joe.flowers at nofreewill.com Wed Mar 13 15:45:53 2013 From: joe.flowers at nofreewill.com (Joe Flowers) Date: Wed, 13 Mar 2013 10:45:53 -0400 Subject: [Libav-user] Seeing the linker options when make-ing ffmpeg? In-Reply-To: References: Message-ID: Sweet Carl!!!!!!!!! That worked!!!! Thank you!!!!!!!!!!!!!! On Tue, Mar 12, 2013 at 6:09 PM, Carl Eugen Hoyos wrote: > Joe Flowers writes: > >> I know I am able to see the compiler options used >> when building ffmpeg with the following command. >> >> make V=1 >> >> Is there a similar command for seeing the linker options? 
> > I wanted to answer "make V=1" but perhaps you mean > "--extra-ldflags=-Wl,-v" ? > > Carl Eugen From brado at bighillsoftware.com Wed Mar 13 20:37:18 2013 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Wed, 13 Mar 2013 12:37:18 -0700 Subject: [Libav-user] Posting a sample project on Github Message-ID: <14DD3B35-6C3C-4A85-B289-4E31AD25FD8E@bighillsoftware.com> Hey guys, My efforts to capture from a MacBook Pro camera and microphone using QTKit, followed by encoding to to FLV and streaming (with which a number of you have offered some very appreciated assistance) are very close, but not quite complete. In a nutshell, I've been able to proceed past my previous audio resampling errors to the place where I now have both video and audio being converted (video), resampled (audio), encoded, and streamed. All runs error-free, and the video looks great, but the audio is just complete junk -- sounds like whitenoise...pure static, no trace of the proper audio that should be recorded. I've worked with this code and scoured the Internet and various blogs / sites for a few a number of days now with hopes for insight on the issue. My gut tells me that the problem at this point is probably ridiculously simple, a pointer wrong or a few lines of code not quite right. As I've encountered others having similar cluelessness, and no apparent "here's how it is done" answer, and both phrasing questions and identifying the source of the problem at this point is starting to become a bit difficult, I thought that I could knock out two birds with one stone by posting a sample app on Github...both for getting help debugging the problem, and leaving a prototype up there which others can reference which will point the way for how to accomplish this with FFmpeg. So in order to do this, and respect FFmpeg etiquette and licensing terms, is it desired that I include with my source code on Github: a) Just FFmpeg built binaries. b) FFmpeg built binaries plus unarchived FFmpeg source. 
c) FFmpeg built binaries plus archived source. If someone could lend me a tip as to what the maintainers would like to see, I would greatly appreciate it. Some time thereafter I hope I can get a runnable Mac app posted which encapsulates exactly what I'm doing, that hopefully will give the audio gurus on this list an easily identifiable issue they can point out. Thanks for your help...let me know on the posting guidelines. Brad From mybrokenbeat at gmail.com Wed Mar 13 23:30:09 2013 From: mybrokenbeat at gmail.com (Oleg) Date: Thu, 14 Mar 2013 00:30:09 +0200 Subject: [Libav-user] Posting a sample project on Github In-Reply-To: <14DD3B35-6C3C-4A85-B289-4E31AD25FD8E@bighillsoftware.com> References: <14DD3B35-6C3C-4A85-B289-4E31AD25FD8E@bighillsoftware.com> Message-ID: <98D81E89-1997-4929-8745-182DD906F529@gmail.com> Just make ffmpeg as a "git submodule" in your repo and distribute your code without any binaries\sources in repo. And about solving your problem, first find where is the problem: In mic? In QTKit API? In ffmpeg API? Don't try to "repair" whole system when you don't know what exactly doesn't work. Good luck 13.03.2013, ? 21:37, Brad O'Hearne ???????(?): > Hey guys, > > My efforts to capture from a MacBook Pro camera and microphone using QTKit, followed by encoding to to FLV and streaming (with which a number of you have offered some very appreciated assistance) are very close, but not quite complete. In a nutshell, I've been able to proceed past my previous audio resampling errors to the place where I now have both video and audio being converted (video), resampled (audio), encoded, and streamed. All runs error-free, and the video looks great, but the audio is just complete junk -- sounds like whitenoise...pure static, no trace of the proper audio that should be recorded. > > I've worked with this code and scoured the Internet and various blogs / sites for a few a number of days now with hopes for insight on the issue. 
My gut tells me that the problem at this point is probably ridiculously simple, a pointer wrong or a few lines of code not quite right. As I've encountered others having similar cluelessness, and no apparent "here's how it is done" answer, and both phrasing questions and identifying the source of the problem at this point is starting to become a bit difficult, I thought that I could knock out two birds with one stone by posting a sample app on Github...both for getting help debugging the problem, and leaving a prototype up there which others can reference which will point the way for how to accomplish this with FFmpeg. > > So in order to do this, and respect FFmpeg etiquette and licensing terms, is it desired that I include with my source code on Github: > > a) Just FFmpeg built binaries. > b) FFmpeg built binaries plus unarchived FFmpeg source. > c) FFmpeg built binaries plus archived source. > > If someone could lend me a tip as to what the maintainers would like to see, I would greatly appreciate it. Some time thereafter I hope I can get a runnable Mac app posted which encapsulates exactly what I'm doing, that hopefully will give the audio gurus on this list an easily identifiable issue they can point out. > > Thanks for your help...let me know on the posting guidelines. > > Brad > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user From gerardcl at gmail.com Thu Mar 14 13:25:58 2013 From: gerardcl at gmail.com (Gerard C.L.) Date: Thu, 14 Mar 2013 13:25:58 +0100 Subject: [Libav-user] aac encoder in real time scenario Message-ID: Hi all, I'm developing an AAC encoder in a real time environment. The scene is: - Capture format -> PCM: 48kHz, stereo, 16b/sample. at 25fps -> so, per frame, 7680Bytes have to be encoded. 
The first problem become when I realised that the encoder works on fixed chunk sizes (in this case, for the audio configuration, the size is 4096Bytes per chunk). So, working like a file encoder, I was only encoding 4096bytes of the 7680 per frame. The solution was implementing FIFOs, using the av_fifo_.. methods. So now, I can hear the entire captured sound per frame, but I hear some garbage and I don't know if it's because of the encoder or how I work with the fifo or if I have conceptual errors in my mind. To note that I'm playing the sound after saving it to a file, could it be also the problem? I'm copying the piece of code I've implemented right now, I'd love if some one gets the error... I'm so noob... -----------------------------------8<------------------------------------------------------------------ int audio_avcodec_encode(struct audio_avcodec_encode_state *aavces, unsigned char *inbuf, unsigned char *outbuf, int inbufsize) { AVPacket pkt; int frameBytes; int outsize = 0; int packetSize = 0; int ret; int nfifoBytes; int encBytes = 0; int sizeTmp = 0; frameBytes = aavces->c->frame_size * aavces->c->channels * 2; av_fifo_realloc2(aavces->fifo_buf,av_fifo_size(aavces->fifo_buf) + inbufsize); // Put the raw audio samples into the FIFO. ret = av_fifo_generic_write(aavces->fifo_buf, /*(int8_t*)*/inbuf, inbufsize, NULL ); printf("\n[avcodec encode] raw buffer intput size: %d ; fifo size: %d",inbufsize, ret); //encoding each frameByte block while ((ret = av_fifo_size(aavces->fifo_buf)) >= frameBytes) { ret = av_fifo_generic_read(aavces->fifo_buf, aavces->fifo_outbuf,frameBytes, NULL ); av_init_packet(&pkt); pkt.size = avcodec_encode_audio(aavces->c, aavces->outbuf,aavces->outbuf_size, (int16_t*) aavces->fifo_outbuf); if (pkt.size < 0) { printf("FFmpeg : ERROR - Can't encode audio frame."); } // Rescale from the codec time_base to the AVStream time_base. 
if (aavces->c->coded_frame && aavces->c->coded_frame->pts != (int64_t) (AV_NOPTS_VALUE )) pkt.pts = av_rescale_q(aavces->c->coded_frame->pts,aavces->c->time_base, aavces->c->time_base); printf("\nFFmpeg : (%d) Writing audio frame with PTS: %lld.",aavces->c->frame_number, pkt.pts); printf("\n[avcodec - audio - encode] Encoder returned %d bytes of data",pkt.size); pkt.data = aavces->outbuf; pkt.flags |= AV_PKT_FLAG_KEY; memcpy(outbuf, pkt.data, pkt.size); } // any bytes left in audio FIFO to encode? nfifoBytes = av_fifo_size(aavces->fifo_buf); printf("\n[avcodec encode] raw buffer intput size: %d", nfifoBytes); if (nfifoBytes > 0) { memset(aavces->fifo_outbuf, 0, frameBytes); if (aavces->c->codec->capabilities & CODEC_CAP_SMALL_LAST_FRAME) { int nFrameSizeTmp = aavces->c->frame_size; if (aavces->c->frame_size != 1 && (aavces->c->codec->capabilities & CODEC_CAP_SMALL_LAST_FRAME)) aavces->c->frame_size = nfifoBytes / (aavces->c->channels * 2); if (av_fifo_generic_read(aavces->fifo_buf, aavces->fifo_outbuf,nfifoBytes, NULL ) == 0) { if (aavces->c->frame_size != 1) encBytes = avcodec_encode_audio(aavces->c, aavces->outbuf,aavces->outbuf_size,(int16_t*) aavces->fifo_outbuf); else encBytes = avcodec_encode_audio(aavces->c, aavces->outbuf,nfifoBytes, (int16_t*) aavces->fifo_outbuf); } aavces->c->frame_size = nFrameSizeTmp;// restore the native frame size } else printf("\n[audio encoder] codec does not support small frames"); } // Now flush the encoder. if (encBytes <= 0){ encBytes = avcodec_encode_audio(aavces->c, aavces->outbuf,aavces->outbuf_size, NULL ); printf("\nFFmpeg : flushing the encoder"); } if (encBytes < 0) { printf("\nFFmpeg : ERROR - Can't encode LAST audio frame."); } av_init_packet(&pkt); sizeTmp = pkt.size; pkt.size = encBytes; pkt.data = aavces->outbuf; pkt.flags |= AV_PKT_FLAG_KEY; // Rescale from the codec time_base to the AVStream time_base. 
if (aavces->c->coded_frame && aavces->c->coded_frame->pts != (int64_t) (AV_NOPTS_VALUE )) pkt.pts = av_rescale_q(aavces->c->coded_frame->pts,aavces->c->time_base, aavces->c->time_base); printf("\nFFmpeg : (%d) Writing audio frame with PTS: %lld.",aavces->c->frame_number, pkt.pts); printf("\n[avcodec - audio - encode] Encoder returned %d bytes of data\n",pkt.size); memcpy(outbuf + sizeTmp, pkt.data, pkt.size); outsize = sizeTmp + pkt.size; return outsize; } -------------------------------------------------->8------------------------------------------------- Then, I'm saving outbuf with outsize per frame encoded. Any idea of what I'm doing wrong? Thanks in advance! -------------------- Gerard C.L. -------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From kirti at voxvalley.com Thu Mar 14 14:18:56 2013 From: kirti at voxvalley.com (kirti) Date: Thu, 14 Mar 2013 18:48:56 +0530 Subject: [Libav-user] Using zeranoe ffmpeg build in visual studio 2008 Message-ID: <5141CE40.9090504@voxvalley.com> Hi all, I am trying to use FFmpeg shared build from zeranoe in visual studio 2008. I have tested my application(pjsip stack ) using ffmpeg in debug mode and it is running fine. but when i am trying to build the application through release version then it compiles fine but while running it is showing error as "The procedure entry point CoCreateInstance could not be located in the dynamic link library avcodec-54.dll" Can any one please help me to solve this issue. thanks and regards Kirti -------------- next part -------------- An HTML attachment was scrubbed... URL: From imrank at cdac.in Thu Mar 14 20:28:18 2013 From: imrank at cdac.in (Imran Khan) Date: Fri, 15 Mar 2013 00:58:18 +0530 (IST) Subject: [Libav-user] Using zeranoe ffmpeg build in visual studio 2008 In-Reply-To: <5141CE40.9090504@voxvalley.com> Message-ID: this is seems a linker reference problem. 
to solve this goto your-application->properties->Linker->Optimization->References and select OPT/NOREF. follow the following link to use ffmpeg in visual studio 2008. http://www.ffmpeg.org/platform.html On Thu, Mar 14, 2013, kirti said: > This is a multi-part message in MIME format. > --------------030501010900060400080604 > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > Content-Transfer-Encoding: 7bit > > Hi all, > > I am trying to use FFmpeg shared build from zeranoe in visual studio > 2008. I have tested my application(pjsip stack ) using ffmpeg in debug mode and it > is running fine. but when i am trying to build the application through > release version then it compiles fine but while running it is showing > error as "The procedure entry point CoCreateInstance could not be located > in the dynamic link library avcodec-54.dll" Can any one please help me to > solve this issue. > > thanks and regards > Kirti > > > --------------030501010900060400080604 > Content-Type: text/html; charset=ISO-8859-1 > Content-Transfer-Encoding: 7bit > > > > > > > > Hi all,
> > > > --------------030501010900060400080604-- > -- Thanks and Regards Imran Khan Project Engineer CDAC Hyderabad ------------------------------------------------------------------------------------------------------------------------------- This e-mail is for the sole use of the intended recipient(s) and may contain confidential and privileged information. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies and the original message. Any unauthorized review, use, disclosure, dissemination, forwarding, printing or copying of this email is strictly prohibited and appropriate legal action will be taken. ------------------------------------------------------------------------------------------------------------------------------- From gerardcl at gmail.com Fri Mar 15 09:19:26 2013 From: gerardcl at gmail.com (Gerard C.L.) Date: Fri, 15 Mar 2013 09:19:26 +0100 Subject: [Libav-user] aac encoder in real time scenario In-Reply-To: References: Message-ID: Good moring, I've seen that it's necessary to show the init methods, so here you have: ----------------------------------------8<------------------------------------------- int audio_avcodec_init_encode(struct audio_avcodec_encode_state *aavces, int bit_rate, int sample_rate, int channels){ int enabled=0; avcodec_register_all(); aavces->c= NULL; /* find the encoder */ aavces->codec = avcodec_find_encoder(CODEC_ID_AAC); //AQU? 
STRING *codec, ara AAC default if (!aavces->codec) { fprintf(stderr, "\n[avcodec - audio - encode] Codec not found"); //exit(1); return enabled; }else enabled = 1; aavces->c= avcodec_alloc_context(); /* put sample parameters */ aavces->c->bit_rate = bit_rate;//64000; aavces->c->sample_fmt = AV_SAMPLE_FMT_S16; //aavces->c->channel_layout = AV_CH_LAYOUT_STEREO; aavces->c->sample_rate = sample_rate;//48000; //TODO: get it from dp_map aavces->c->channels = channels;//2; //TODO aavces->c->profile = FF_PROFILE_AAC_MAIN;//FF_PROFILE_AAC_LOW; //aavces->c->time_base = (AVRational){1, sample_rate}; aavces->c->time_base.num = 1; aavces->c->time_base.den = sample_rate; aavces->c->codec_type = AVMEDIA_TYPE_AUDIO; /* open it */ if (avcodec_open(aavces->c, aavces->codec) < 0) { fprintf(stderr, "\n[avcodec - audio - encode] Could not open codec"); //exit(1); return enabled; }else enabled = 1; /* the codec gives us the frame size, in samples */ //aavces->frame_size = aavces->c->frame_size; //aavces->samples = malloc(aavces->frame_size * 2 * aavces->c->channels); aavces->outbuf_size = 1024;//FF_MIN_BUFFER_SIZE * 10; aavces->outbuf = (uint8_t *)av_malloc(aavces->outbuf_size); aavces->fifo_buf = av_fifo_alloc(2*MAX_AUDIO_PACKET_SIZE);//FF_MIN_BUFFER_SIZE); aavces->fifo_outbuf = (uint8_t *)av_malloc(MAX_AUDIO_PACKET_SIZE); if (!(aavces->outbuf == NULL))enabled = 1; printf("\n[avcodec - audio - encode] Enabled!",enabled); return enabled; } ------------------------------->8------------------------------------------------------ Anyone can help me, please? Hope not being a concept problem... Thanks, -------------------- Gerard C.L. -------------------- 2013/3/14 Gerard C.L. > Hi all, > > I'm developing an AAC encoder in a real time environment. > > The scene is: > - Capture format -> PCM: 48kHz, stereo, 16b/sample. at 25fps -> so, per > frame, 7680Bytes have to be encoded. 
> > The first problem become when I realised that the encoder works on fixed > chunk sizes (in this case, for the audio configuration, the size is > 4096Bytes per chunk). So, working like a file encoder, I was only encoding > 4096bytes of the 7680 per frame. > The solution was implementing FIFOs, using the av_fifo_.. methods. So now, > I can hear the entire captured sound per frame, but I hear some garbage and > I don't know if it's because of the encoder or how I work with the fifo or > if I have conceptual errors in my mind. To note that I'm playing the sound > after saving it to a file, could it be also the problem? > > I'm copying the piece of code I've implemented right now, I'd love if some > one gets the error... I'm so noob... > > > -----------------------------------8<------------------------------------------------------------------ > int audio_avcodec_encode(struct audio_avcodec_encode_state *aavces, > unsigned char *inbuf, unsigned char *outbuf, int inbufsize) { > AVPacket pkt; > int frameBytes; > int outsize = 0; > int packetSize = 0; > int ret; > int nfifoBytes; > int encBytes = 0; > int sizeTmp = 0; > > frameBytes = aavces->c->frame_size * aavces->c->channels * 2; > av_fifo_realloc2(aavces->fifo_buf,av_fifo_size(aavces->fifo_buf) + > inbufsize); > > // Put the raw audio samples into the FIFO. > ret = av_fifo_generic_write(aavces->fifo_buf, /*(int8_t*)*/inbuf, > inbufsize, NULL ); > > printf("\n[avcodec encode] raw buffer intput size: %d ; fifo size: > %d",inbufsize, ret); > > //encoding each frameByte block > while ((ret = av_fifo_size(aavces->fifo_buf)) >= frameBytes) { > ret = av_fifo_generic_read(aavces->fifo_buf, > aavces->fifo_outbuf,frameBytes, NULL ); > > av_init_packet(&pkt); > > pkt.size = avcodec_encode_audio(aavces->c, > aavces->outbuf,aavces->outbuf_size, (int16_t*) aavces->fifo_outbuf); > > if (pkt.size < 0) { > printf("FFmpeg : ERROR - Can't encode audio frame."); > } > // Rescale from the codec time_base to the AVStream time_base. 
> if (aavces->c->coded_frame && aavces->c->coded_frame->pts != > (int64_t) (AV_NOPTS_VALUE )) > pkt.pts = > av_rescale_q(aavces->c->coded_frame->pts,aavces->c->time_base, > aavces->c->time_base); > > printf("\nFFmpeg : (%d) Writing audio frame with PTS: > %lld.",aavces->c->frame_number, pkt.pts); > printf("\n[avcodec - audio - encode] Encoder returned %d bytes of > data",pkt.size); > > pkt.data = aavces->outbuf; > pkt.flags |= AV_PKT_FLAG_KEY; > > memcpy(outbuf, pkt.data, pkt.size); > } > > // any bytes left in audio FIFO to encode? > nfifoBytes = av_fifo_size(aavces->fifo_buf); > > printf("\n[avcodec encode] raw buffer intput size: %d", nfifoBytes); > > if (nfifoBytes > 0) { > memset(aavces->fifo_outbuf, 0, frameBytes); > if (aavces->c->codec->capabilities & CODEC_CAP_SMALL_LAST_FRAME) { > int nFrameSizeTmp = aavces->c->frame_size; > if (aavces->c->frame_size != 1 && > (aavces->c->codec->capabilities & CODEC_CAP_SMALL_LAST_FRAME)) > aavces->c->frame_size = nfifoBytes / (aavces->c->channels > * 2); > > if (av_fifo_generic_read(aavces->fifo_buf, > aavces->fifo_outbuf,nfifoBytes, NULL ) == 0) { > if (aavces->c->frame_size != 1) > encBytes = avcodec_encode_audio(aavces->c, > aavces->outbuf,aavces->outbuf_size,(int16_t*) aavces->fifo_outbuf); > else > encBytes = avcodec_encode_audio(aavces->c, > aavces->outbuf,nfifoBytes, (int16_t*) aavces->fifo_outbuf); > } > aavces->c->frame_size = nFrameSizeTmp;// restore the native > frame size > } else > printf("\n[audio encoder] codec does not support small > frames"); > } > > // Now flush the encoder. 
> if (encBytes <= 0){ > encBytes = avcodec_encode_audio(aavces->c, > aavces->outbuf,aavces->outbuf_size, NULL ); > printf("\nFFmpeg : flushing the encoder"); > } > if (encBytes < 0) { > printf("\nFFmpeg : ERROR - Can't encode LAST audio frame."); > } > av_init_packet(&pkt); > > sizeTmp = pkt.size; > > pkt.size = encBytes; > pkt.data = aavces->outbuf; > pkt.flags |= AV_PKT_FLAG_KEY; > > // Rescale from the codec time_base to the AVStream time_base. > if (aavces->c->coded_frame && aavces->c->coded_frame->pts != (int64_t) > (AV_NOPTS_VALUE )) > pkt.pts = > av_rescale_q(aavces->c->coded_frame->pts,aavces->c->time_base, > aavces->c->time_base); > > printf("\nFFmpeg : (%d) Writing audio frame with PTS: > %lld.",aavces->c->frame_number, pkt.pts); > printf("\n[avcodec - audio - encode] Encoder returned %d bytes of > data\n",pkt.size); > > memcpy(outbuf + sizeTmp, pkt.data, pkt.size); > > outsize = sizeTmp + pkt.size; > > return outsize; > } > > -------------------------------------------------->8------------------------------------------------- > > > Then, I'm saving outbuf with outsize per frame encoded. > > Any idea of what I'm doing wrong? > > Thanks in advance! > -------------------- > Gerard C.L. > -------------------- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scdmusic at gmail.com Fri Mar 15 21:43:47 2013 From: scdmusic at gmail.com (John Locke) Date: Fri, 15 Mar 2013 16:43:47 -0400 Subject: [Libav-user] Encode just one frame [H264] or others? Message-ID: Is there a way I can encode just one frame to display? I would like to Encode a single H264 from and then be able to decode it. If this is not possible with H264 is there another way? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From mybrokenbeat at gmail.com Sat Mar 16 00:00:08 2013 From: mybrokenbeat at gmail.com (Oleg) Date: Sat, 16 Mar 2013 01:00:08 +0200 Subject: [Libav-user] Encode just one frame [H264] or others? 
In-Reply-To: References: Message-ID: <89CC065D-6ED7-41FE-9278-A315D8306291@gmail.com> You definitely can encode one frame into one h264 key-frame, just specify ffmpeg's output format as h264. And you also can decode it via ffmpeg. Perhaps you need JPEG for storing a single frame? 15.03.2013, at 22:43, John Locke wrote: > Is there a way I can encode just one frame to display? I would like to Encode a single H264 from and then be able to decode it. If this is not possible with H264 is there another way? > > Thanks > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user From cehoyos at ag.or.at Sat Mar 16 09:34:49 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Sat, 16 Mar 2013 08:34:49 +0000 (UTC) Subject: [Libav-user] Encode just one frame [H264] or others? References: Message-ID: John Locke writes: > Is there a way I can encode just one frame to display? > I would like to Encode a single H264 from and then be > able to decode it. This works fine here with $ ffmpeg -i input -vframes 1 out.h264 Carl Eugen From me at renecalles.de Sun Mar 17 16:15:49 2013 From: me at renecalles.de (=?iso-8859-1?Q?Ren=E9_Calles?=) Date: Sun, 17 Mar 2013 16:15:49 +0100 Subject: [Libav-user] How to get started with libav Message-ID: <48AF6746-9874-40E6-84B5-10FB9092A1AB@renecalles.de> Dear Developers, i would like to ask you for help about pointing me where to start ( except for programming language ) to understand the basics in programmatically using Libav in general. I would like to understand the general basics and hope someone could point me to some resource where i can find that. So, from my actual understanding there are the following steps: 1. Demuxing of a file / stream => output raw stream 2. Decode audio / video => output samples / frames 3. Do filtering 4. Encode audio / video => output raw streams 5.
Multiplex audio / video => output file / stream Is this even right? Am i missing anything? What could / should i do or know to understand those single steps except of reading the ffmpeg code ;) ? Thanks a lot for all your help and of course your work you already did. Hope to be able to sent some patches in the near future too. René From nicolas.george at normalesup.org Sun Mar 17 19:20:23 2013 From: nicolas.george at normalesup.org (Nicolas George) Date: Sun, 17 Mar 2013 19:20:23 +0100 Subject: [Libav-user] How to get started with libav In-Reply-To: <48AF6746-9874-40E6-84B5-10FB9092A1AB@renecalles.de> References: <48AF6746-9874-40E6-84B5-10FB9092A1AB@renecalles.de> Message-ID: <20130317182023.GA16626@phare.normalesup.org> Le septidi 27 ventôse, an CCXXI, René Calles wrote: > i would like to ask you for help about pointing me where to start ( except > for programming language ) to understand the basics in programmatically > using Libav in general. I would like to understand the general basics and > hope someone could point me to some resource where i can find that. Did you have a look at the doc directory in the source tree, especially the examples subdirectory? > So, from my actual understanding there are the following steps: > > 1. Demuxing of a file / stream => output raw stream After demuxing, you get "packets". > 2. Decode audio / video => output samples / frames > 3. Do filtering > 4. Encode audio / video => output raw streams Same as before, you get packets. > 5. Multiplex audio / video => output file / stream > > Is this even right? Apart from the packets thing, it looks right. Regards, -- Nicolas George -------------- next part -------------- A non-text attachment was scrubbed...
Name: not available Type: application/pgp-signature Size: 198 bytes Desc: Digital signature URL: From xuanyu.huang at gmail.com Mon Mar 18 02:47:34 2013 From: xuanyu.huang at gmail.com (=?GB2312?B?u8bQ+dPu?=) Date: Mon, 18 Mar 2013 12:47:34 +1100 Subject: [Libav-user] about channel_layout in AVCodecContext Message-ID: Hi Guys, I'm using libswresample to convert audio between different sample formats (planar format to packed format). To init the swcontext I need the input audio channel_layout. But today I found a video where, when avcodec_open is called on it to get the audio codec, the returned AVCodecContext has a channels field of 1, but channel_layout 0. Since a 0 channel_layout will cause an swcontext init error, I'd like to know what 1 channel but 0 channel_layout means and if there's a workaround so that I can call swresample successfully? Great thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From bjoern.drabeck at gmail.com Mon Mar 18 03:06:23 2013 From: bjoern.drabeck at gmail.com (Bjoern Drabeck) Date: Mon, 18 Mar 2013 10:06:23 +0800 Subject: [Libav-user] about channel_layout in AVCodecContext In-Reply-To: References: Message-ID: > > > I'm using libswresample to convert audio between different sample formats > (planar format to packed format). > > To init the swcontext I need the input audio channel_layout. > > But today I found a video where, when avcodec_open is called on it to get the audio > codec, the returned AVCodecContext > has a channels field of 1, but channel_layout 0. > > Since a 0 channel_layout will cause an swcontext init error, I'd like to know > what 1 channel but 0 channel_layout means > and if there's a workaround so that I can call swresample successfully? > > If your channel layout is 0, I think you can just simply fall back to using av_get_default_channel_layout(codecCtx->channels) -------------- next part -------------- An HTML attachment was scrubbed...
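The fallback suggested above can be illustrated without the real libavutil headers. The sketch below reimplements the default-layout mapping for a few common channel counts; the mask constants mirror libavutil/channel_layout.h, but `default_layout` and `effective_layout` are hypothetical stand-ins for av_get_default_channel_layout(), not the real API:

```c
#include <stdint.h>

/* Channel mask bits, mirroring libavutil/channel_layout.h. */
#define CH_FRONT_LEFT    0x01ULL
#define CH_FRONT_RIGHT   0x02ULL
#define CH_FRONT_CENTER  0x04ULL
#define CH_LOW_FREQUENCY 0x08ULL
#define CH_BACK_LEFT     0x10ULL
#define CH_BACK_RIGHT    0x20ULL

/* Hypothetical stand-in for av_get_default_channel_layout(): pick a
 * canonical layout for a bare channel count, or 0 if there is no default. */
static uint64_t default_layout(int channels)
{
    switch (channels) {
    case 1: return CH_FRONT_CENTER;                 /* mono   */
    case 2: return CH_FRONT_LEFT | CH_FRONT_RIGHT;  /* stereo */
    case 6: return CH_FRONT_LEFT | CH_FRONT_RIGHT | CH_FRONT_CENTER |
                   CH_LOW_FREQUENCY | CH_BACK_LEFT | CH_BACK_RIGHT; /* 5.1 */
    default: return 0; /* unknown: caller must handle */
    }
}

/* Usage: if the decoder reported a channel count but no layout, substitute
 * the default before initializing the resampler. */
static uint64_t effective_layout(uint64_t reported, int channels)
{
    return reported ? reported : default_layout(channels);
}
```

So a stream reporting channels == 1 and channel_layout == 0 would be treated as mono before the resampler is set up.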
URL: From brado at bighillsoftware.com Mon Mar 18 04:49:01 2013 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Sun, 17 Mar 2013 20:49:01 -0700 Subject: [Libav-user] av_interleaved_write_frame vs av_write_frame Message-ID: <20ABAF0F-BD00-49B0-9F9C-D3FE2171D27F@bighillsoftware.com> I am attempting to stream AVPackets with AVFrame data containing either video or audio data. The video format in play is FLV. The audio and video are being captured by QTKit -- which provides separate callbacks to deliver video samples and audio samples. My question revolves around the nature of av_interleaved_write_frame vs. av_write_frame. Presently, if I stream video using av_write_frame, it appears to be received and able to be played successfully on the other end. However, if I stream video using av_write_interleaved, I just get black video on the other end. While the audio data appears to be filling packets and streaming successfully (even thought the actual audio data is garbage and not what I want -- that's another problem I'm trying to figure out and have posted about on this list), it appears that only av_write_frame allows the video to come through on the other side. My question is this: what requirement is there to use one call over the other, and assuming for a moment that av_interleaved_write_frame is to be used when there is both video and audio (versus just one or the other), how is the fact that there's no guarantee when audio samples or video frames will be delivered (or if they are delivered at all) affect streaming? QTKit might be delivering both audio and video samples continually, or video only, or audio only -- there's no guarantee of absolutely either, and even if both are delivered, there's no guarantee of when samples will arrive. If someone can enlighten me a little bit to the nature of these calls, and if/when one is required over the other, or if they are merely optional, that would be great. 
Thanks, Brad From nicolas.george at normalesup.org Mon Mar 18 13:18:10 2013 From: nicolas.george at normalesup.org (Nicolas George) Date: Mon, 18 Mar 2013 13:18:10 +0100 Subject: [Libav-user] about channel_layout in AVCodecContext In-Reply-To: References: Message-ID: <20130318121810.GA16033@phare.normalesup.org> L'octidi 28 ventôse, an CCXXI, ??? wrote: > But today I found a video where, when avcodec_open is called on it to get the audio > codec, the returned AVCodecContext > has a channels field of 1, but channel_layout 0. > > Since a 0 channel_layout will cause an swcontext init error, I'd like to know Are you sure of that? IIRC, lswr is perfectly capable of dealing with unknown channel layouts. You have to provide the channel count, of course. > what 1 channel but 0 channel_layout means It means 1 channel, but we do not know which one; it may be the left channel of a stereo stream that was split, or more or less anything. Regards, -- Nicolas George From jorgepblank at gmail.com Mon Mar 18 01:25:39 2013 From: jorgepblank at gmail.com (=?UTF-8?Q?Jorge_Israel_Pe=C3=B1a?=) Date: Sun, 17 Mar 2013 17:25:39 -0700 Subject: [Libav-user] libswresample vs libavfilter for target format conversion Message-ID: Hey, this is my first time decoding audio and I'm currently trying to figure out the best way to convert audio to the target format. My understanding is that when the audio is decoded, in order to play it through the sound system (with say pulseaudio), it should be converted to the sound system's target format if it isn't already in that format, like say if my target format is two channels, S16 non-planar, 44100 Hz. After consulting the ffplay source I found that they use libswresample. However I've also found mention of using libavfilter for this purpose. I have no experience with libavfilter but it seems to me like a very recent commit to ffplay adds support for it: https://github.com/FFmpeg/FFmpeg/commit/e96175ad7b576ad57b83d399193ef10b2bb016ae so I can probably learn from that.
It seems like it uses the 'abuffer' filter, is this the one to use for this case? It doesn't strike me (from the doxygen) as a filter used for conversion. It seems to me like it's setup as an `abuffer` for the input and an `abuffersink` for the output, and the `abuffersink`'s parameters describe the target format. So when a frame is put in and pulled out from the filter graph it's automatically converted to that format due to the parameters passed to 'abuffersink'? I had also previously seen someone say to use the 'aformat' filter for this purpose. Is there a difference? Looking at the list of filters, I also see "aconvert: Sample format and channel layout conversion audio filter" and "resample: Sample format and channel layout conversion audio filter". So I see there's abuffer, aformat, aconvert, and resample, so I'm pretty confused here and would really appreciate any clarification. Then again I know nothing about the whole filter system so this may be something very simple/obvious. Aside from that, what are the benefits of using libavfilter for converting to the target format over using libswresample? It seems to me like after the initial setup for avfilter, the actual conversion process seems a lot simpler, simply consisting of putting frames and pulling them out of the filter graph. Is this the benefit? Thanks, I would really appreciate some clarification for this. -- - Jorge Israel Pe?a -------------- next part -------------- An HTML attachment was scrubbed... URL: From onemda at gmail.com Tue Mar 19 09:57:14 2013 From: onemda at gmail.com (Paul B Mahol) Date: Tue, 19 Mar 2013 08:57:14 +0000 Subject: [Libav-user] libswresample vs libavfilter for target format conversion In-Reply-To: References: Message-ID: On 3/18/13, Jorge Israel Pena wrote: > Hey, this is my first time decoding audio and I'm currently trying to > figure out the best way to convert audio to the target format. 
> > My understanding is that when the audio is decoded, in order to play it > through the sound system (with say pulseaudio), it should be converted to > the sound system's target format if it isn't already in that format, like > say if my target format is two channels, S16 non-planar, 44100hz. After > consulting the ffplay source I found that they use libswresample. However > I've also found mention of using libavfilter for this purpose. I have no > experience with libavfilter but it seems to me like a very recent commit to > ffplay adds support for it: > https://github.com/FFmpeg/FFmpeg/commit/e96175ad7b576ad57b83d399193ef10b2bb016ae > so > I can probably learn from that. No you can not. That commit is for using audio filters via ffplay and have nothing to do with libswresample. > > It seems like it uses the 'abuffer' filter, is this the one to use for this > case? It doesn't strike me (from the doxygen) as a filter used for > conversion. It seems to me like it's setup as an `abuffer` for the input > and an `abuffersink` for the output, and the `abuffersink`'s parameters > describe the target format. So when a frame is put in and pulled out from > the filter graph it's automatically converted to that format due to the > parameters passed to 'abuffersink'? > > I had also previously seen someone say to use the 'aformat' filter for this > purpose. Is there a difference? Looking at the list of filters, I also see > "aconvert: Sample format and channel layout conversion audio filter" and > "resample: Sample format and channel layout conversion audio filter". So I > see there's abuffer, aformat, aconvert, and resample, so I'm pretty > confused here and would really appreciate any clarification. Then again I > know nothing about the whole filter system so this may be something very > simple/obvious. > > Aside from that, what are the benefits of using libavfilter for converting > to the target format over using libswresample? 
It seems to me like after > the initial setup for avfilter, the actual conversion process seems a lot > simpler, simply consisting of putting frames and pulling them out of the > filter graph. Is this the benefit? Again you are wrong. It is trivial to convert between various sample formats with libswresample. And you do not need to use libavfilter's filters for this. ffmpeg/ffplay just use filter(s) to convert it to the wanted format, but you are not required to use the filter way; it is just another abstraction of libswresample. > > Thanks, I would really appreciate some clarification for this. > > -- > - Jorge Israel Pena > From cehoyos at ag.or.at Tue Mar 19 10:37:48 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Tue, 19 Mar 2013 09:37:48 +0000 (UTC) Subject: [Libav-user] libswresample vs libavfilter for target format conversion References: Message-ID: Jorge Israel Peña writes: > Aside from that, what are the benefits of using libavfilter > for converting to the target format over using libswresample? I believe the main benefit is that if you want to (also) use another filter, you only need to open one filtergraph. If you are not using another audio filter, there probably is no benefit (except that you may or may not prefer the filter interface over the libswresample interface). Carl Eugen From rjvbertin at gmail.com Tue Mar 19 15:08:20 2013 From: rjvbertin at gmail.com (=?iso-8859-1?Q?=22Ren=E9_J=2EV=2E_Bertin=22?=) Date: Tue, 19 Mar 2013 15:08:20 +0100 Subject: [Libav-user] libswresample vs libavfilter for target format conversion In-Reply-To: References: Message-ID: On Mar 19, 2013, at 10:37, Carl Eugen Hoyos wrote: > > I believe the main benefit is that if you want to (also) > use another filter, you only need to open one filtergraph. > If you are not using another audio filter, there probably So, out of curiosity, is that why ffplay uses lavfi instead of lswr?
R From onemda at gmail.com Tue Mar 19 16:55:01 2013 From: onemda at gmail.com (Paul B Mahol) Date: Tue, 19 Mar 2013 15:55:01 +0000 Subject: [Libav-user] libswresample vs libavfilter for target format conversion In-Reply-To: References: Message-ID: On 3/19/13, "Rene J.V. Bertin" wrote: > On Mar 19, 2013, at 10:37, Carl Eugen Hoyos wrote: >> >> I believe the main benefit is that if you want to (also) >> use another filter, you only need to open one filtergraph. >> If you are not using another audio filter, there probably > > So, out of curiosity, is that why ffplay uses lavfi instead of lswr? Not at all; ffplay used libswresample directly long before the mentioned patch. (Yes, there are still places for improvements.) The mentioned patch just adds -af support to ffplay, which was previously missing. > > R > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user > From jorgepblank at gmail.com Tue Mar 19 10:41:37 2013 From: jorgepblank at gmail.com (=?UTF-8?Q?Jorge_Israel_Pe=C3=B1a?=) Date: Tue, 19 Mar 2013 02:41:37 -0700 Subject: [Libav-user] libswresample vs libavfilter for target format conversion In-Reply-To: References: Message-ID: > > I believe the main benefit is that if you want to (also) > use another filter, you only need to open one filtergraph. > If you are not using another audio filter, there probably > is no benefit (except that you may or may not prefer the > filter interface over the libswresample interface). Ah, okay. Thanks, I appreciate the clarification. On Tue, Mar 19, 2013 at 2:37 AM, Carl Eugen Hoyos wrote: > Jorge Israel Peña writes: > > > Aside from that, what are the benefits of using libavfilter > > for converting to the target format over using libswresample? > > I believe the main benefit is that if you want to (also) > use another filter, you only need to open one filtergraph.
> If you are not using another audio filter, there probably > is no benefit (except that you may or may not prefer the > filter interface over the libswresample interface). > > Carl Eugen > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user > -- - Jorge Israel Pe?a -------------- next part -------------- An HTML attachment was scrubbed... URL: From jorgepblank at gmail.com Tue Mar 19 10:19:39 2013 From: jorgepblank at gmail.com (=?UTF-8?Q?Jorge_Israel_Pe=C3=B1a?=) Date: Tue, 19 Mar 2013 02:19:39 -0700 Subject: [Libav-user] libswresample vs libavfilter for target format conversion In-Reply-To: References: Message-ID: > > No you can not. That commit is for using audio filters via ffplay and > have nothing > to do with libswresample. I was saying that I could probably use that commit to learn how to use libavfilter, not libswresample. > ffmpeg/ffplay just use filter(s) to convert it to wanted format but you > are not required to use filter way, it is just another abstraction of > libswresample. Thanks, this was my source of confusion. I had seen other mailing list messages in which people suggested using filters instead of libswresample for conversion. I wasn't aware that audio filters in that case were just an abstraction of libswresample. And you do not need to use libavfilter's filters for this. Indeed my application already used libswresample and worked perfectly. I was just confused as to why it seemed some people recommended using filters instead, and what the difference was. Thanks again, I appreciate the help. -- - Jorge Israel Pe?a -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hskim095 at naver.com Wed Mar 20 03:46:44 2013 From: hskim095 at naver.com (=?UTF-8?B?6rmA7Z2s7IiZ?=) Date: Wed, 20 Mar 2013 11:46:44 +0900 (KST) Subject: [Libav-user] =?utf-8?q?Is_it_possible_to_send_H=2E264_data_using_?= =?utf-8?q?TCP_data=3F_not_RTP?= Message-ID: Hello, Is it possible to send H.264 data using TCP data? not RTP. I mean if H.264 NAL and VCL is sent to TCP data, the data is sent H.264 decoder directly. Can H.264 decoder decode properly? Best Regards, HSK -------------- next part -------------- An HTML attachment was scrubbed... URL: From mybrokenbeat at gmail.com Wed Mar 20 08:22:11 2013 From: mybrokenbeat at gmail.com (Oleg) Date: Wed, 20 Mar 2013 09:22:11 +0200 Subject: [Libav-user] Is it possible to send H.264 data using TCP data? not RTP In-Reply-To: References: Message-ID: <1E2CE9E3-447B-4682-AD50-E163584E0528@gmail.com> Of course, it can. Why it shouldn't? 20.03.2013, ? 4:46, ??? ???????(?): > > Hello, > > > Is it possible to send H.264 data using TCP data? not RTP. > > > I mean if H.264 NAL and VCL is sent to TCP data, the data is sent H.264 decoder directly. > > Can H.264 decoder decode properly? > > > Best Regards, > > HSK > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From hskim095 at naver.com Wed Mar 20 08:31:36 2013 From: hskim095 at naver.com (=?UTF-8?B?6rmA7Z2s7IiZ?=) Date: Wed, 20 Mar 2013 16:31:36 +0900 (KST) Subject: [Libav-user] =?utf-8?q?Is_it_possible_to_send_H=2E264_data_using_?= =?utf-8?q?TCP_data=3F_not_RTP?= In-Reply-To: References: Message-ID: <716323168bd7fbf8d85d26338b2745ce@tweb07.nm.nhnsystem.com> Hello, This problem is solved. Thank you. Best Regards, HSK -----Original Message----- From: "???"<hskim095 at naver.com> To: <libav-user at ffmpeg.org>; Cc: Sent: 2013-03-20 (?) 
11:46:44 Subject: [Libav-user]Is it possible to send H.264 data using TCP data? not RTP Hello, Is it possible to send H.264 data using TCP data? not RTP. I mean if H.264 NAL and VCL is sent to TCP data, the data is sent H.264 decoder directly. Can H.264 decoder decode properly? Best Regards, HSK -------------- next part -------------- An HTML attachment was scrubbed... URL: From rene.calles at yahoo.de Tue Mar 19 21:34:04 2013 From: rene.calles at yahoo.de (=?iso-8859-1?Q?Ren=E9_Calles?=) Date: Tue, 19 Mar 2013 21:34:04 +0100 Subject: [Libav-user] How to get started with libav Message-ID: <7774089D-A06E-47C6-8AF3-4287FAE0BDF0@yahoo.de> Dear Developers, i would like to ask you for help about pointing me where to start ( except for programming language ) to understand the basics in programmatically using Libav in general. I would like to understand the general basics and hope someone could point me to some resource where i can find that. So, from my actual understanding there are the following steps: 1. Demuxing of a file / stream => output raw stream 2. Decode audio / video => output samples / frames 3. Do filtering 4. Encode audio / video => output raw streams 5. Multiplex audio / video => output file / stream Is this even right? Am i missing anything? What could / should i do or know to understand those single steps except of reading the ffmpeg code ;) ? Thanks a lot for all your help and of course your work you already did. Hope to be able to sent some patches in the near future too. Ren? From mczarnek at objectvideo.com Wed Mar 20 22:07:01 2013 From: mczarnek at objectvideo.com (Czarnek, Matt) Date: Wed, 20 Mar 2013 17:07:01 -0400 Subject: [Libav-user] H264 misses every 16th packet Message-ID: Hello, I am connecting to an H264 stream and streaming video from it. 
When streaming from one camera it works, when streaming from the other one, it seems like exactly every 16th frame I am getting an error message along the lines of: [NULL @ 000bc800] RTP:missed 152 packets [h264 @ 000b7bc0] error while decoding MB 19 16, bytestream (-14) [h264 @ 000b7bc0] concealing 6270 DC, 6270 AC, 6270 MV error in I frame The numbers are often slightly different. Both streams are correctly streamed by VLC. Any thoughts as to how I could fix this? Thank you, Matt -- Matt Czarnek, Software Engineer Work Phone: (760) 4-OBJVID aka: (760) 462-5843 Cell Phone: HAHAHOORAY ObjectVideo Inc. http://www.objectvideo.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at renecalles.de Thu Mar 21 07:32:25 2013 From: me at renecalles.de (=?iso-8859-1?Q?Ren=E9_Calles?=) Date: Thu, 21 Mar 2013 07:32:25 +0100 Subject: [Libav-user] How to get started with libav basics Message-ID: <2918A13B-70CF-4255-B2FC-0B213B3A5277@renecalles.de> Dear Developers, i would like to ask you for help about pointing me where to start ( except for programming language ) to understand the basics in programmatically using Libav in general. I would like to understand the general basics and hope someone could point me to some resource where i can find that. So, from my actual understanding there are the following steps: 1. Demuxing of a file / stream => output raw stream 2. Decode audio / video => output samples / frames 3. Do filtering 4. Encode audio / video => output raw streams 5. Multiplex audio / video => output file / stream Is this even right? Am i missing anything? What could / should i do or know to understand those single steps except of reading the ffmpeg code ;) ? Thanks a lot for all your help and of course your work you already did. Hope to be able to sent some patches in the near future too. Ren? 
From cehoyos at ag.or.at Thu Mar 21 14:37:56 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Thu, 21 Mar 2013 13:37:56 +0000 (UTC) Subject: [Libav-user] How to get started with libav basics References: <2918A13B-70CF-4255-B2FC-0B213B3A5277@renecalles.de> Message-ID: Ren? Calles writes: > i would like to ask you for help about pointing me where to > start ( except for programming language ) to understand the > basics in programmatically using Libav in general. I would > like to understand the general basics and hope someone could > point me to some resource where i can find that. Please download current FFmpeg and look into doc/examples: http://ffmpeg.org/download.html Carl Eugen From aeseldyrx at gmail.com Thu Mar 21 14:47:26 2013 From: aeseldyrx at gmail.com (aeseldyrx) Date: Thu, 21 Mar 2013 14:47:26 +0100 Subject: [Libav-user] Help with a couple of bitmap scaling issues Message-ID: Hi, I've recently been testing image decoding and scaling with FFmpeg (compiled with mingw as dlls on Windows 7 x64), and everything has been working smoothly with the exception of two problems. Both problems are seemingly related to down-scaling of large images. As mentioned in the subject, I experienced these issues while testing bmp files. 1. The first issue happens, when I try down-scaling a large bitmap (4608x3328) to a much smaller size (e.g. 256x185). The resulting image resembles the original, however all colours have become slightly more green during the scaling. Basically the output looks as if it has gotten a green tint/overlay. Initial testing was done with sws_scale via a small test application I've written. The conversion was from BGR24 to BGR24. After noticing the problem, I consequently tried to reproduce the issue with FFmpeg.exe (Zeranoe's build), and the results were identical. 
To reproduce: Get a large bitmap such as this: http://wa8lmf.net/MapCaptureTool/Google-Terrain-SoCal-Zoom-12.htm Use FFmpeg.exe with the following command: ffmpeg.exe -i path\to\image -s 256x185 path\to\output.bmp To see a properly coloured output, change the output extension to .jpg instead. Am I doing something wrong here, or could this be an sws_scale bgr2bgr error? 2. The second issue has to do with extreme down-scaling. If we use the linked image from above as an example. If I were to scale it down to something as small as 64x46, I would get an error telling me to increase the MAX_FILTER_SIZE, to accomplish such extreme scaling. Now, if I increase the MAX_FILTER_SIZE, the extreme scaling works, but the output is corrupt, and normal scaling of other images results in a crash. I simply doubled the size of MAX_FILTER_SIZE, to keep the format. Perhaps that was the wrong action? Would it be better to simply perform consecutively smaller down-scale operations on the image until the destination size has been reached? Oh! And one very interesting note: Both issues were resolved, when I used FFmpeg dlls compiled with the MSVC toolchain (same config). Hmm... I will happily provide additional information if required (build configuration etc). Thanks for your help, and for FFmpeg. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.george at normalesup.org Thu Mar 21 15:01:35 2013 From: nicolas.george at normalesup.org (Nicolas George) Date: Thu, 21 Mar 2013 15:01:35 +0100 Subject: [Libav-user] Help with a couple of bitmap scaling issues In-Reply-To: References: Message-ID: <20130321140135.GA2752@phare.normalesup.org> Le primidi 1er germinal, an CCXXI, aeseldyrx a ?crit?: > 1. The first issue happens, when I try down-scaling a large bitmap > (4608x3328) to a much smaller size (e.g. 256x185). > The resulting image resembles the original, however all colours have become > slightly more green during the scaling. 
> Basically the output looks as if it has gotten a green tint/overlay. > > Initial testing was done with sws_scale via a small test application I've > written. The conversion was from BGR24 to BGR24. > After noticing the problem, I consequently tried to reproduce the issue > with FFmpeg.exe (Zeranoe's build), and the results were identical. > > To reproduce: > Get a large bitmap such as this: > http://wa8lmf.net/MapCaptureTool/Google-Terrain-SoCal-Zoom-12.htm > Use FFmpeg.exe with the following command: ffmpeg.exe -i path\to\image -s > 256x185 path\to\output.bmp > > To see a properly coloured output, change the output extension to .jpg > instead. > > Am I doing something wrong here, or could this be an sws_scale bgr2bgr > error? I am seeing the same greenish tint with BMP output and not with PNG output using ffmpeg built with gcc for Linux. Note: the green tint happens with "format=rgba,scale=256x185" but not with "scale=256x185,format=rgba", so the problem is when rgba is the output format. > 2. The second issue has to do with extreme down-scaling. If we use the > linked image from above as an example. > If I were to scale it down to something as small as 64x46, I would get an > error telling me to increase the MAX_FILTER_SIZE, > to accomplish such extreme scaling. > Now, if I increase the MAX_FILTER_SIZE, the extreme scaling works, but the > output is corrupt, and normal scaling of other images > results in a crash. > > I simply doubled the size of MAX_FILTER_SIZE, to keep the format. Perhaps > that was the wrong action? > Would it be better to simply perform consecutively smaller down-scale > operations on the image until the destination size has been reached? IMHO, lsws driving code should detect this kind of extreme scaling and split it automatically into manageable steps. I have no idea what would be the more efficient / aesthetic way of downscaling: 1/72 = 1/8 × 1/9? 1/2 × 1/36? 1/36 × 1/2? People who know the scaling algorithms better than me can answer.
Regards, -- Nicolas George -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: Digital signature URL: From me at renecalles.de Thu Mar 21 17:48:31 2013 From: me at renecalles.de (=?utf-8?Q?Ren=C3=A9_Calles?=) Date: Thu, 21 Mar 2013 17:48:31 +0100 Subject: [Libav-user] How to get started with libav basics In-Reply-To: References: <2918A13B-70CF-4255-B2FC-0B213B3A5277@renecalles.de> Message-ID: <109C1474-A72D-4D10-BAB6-5BD4C6E7A817@renecalles.de> Thank you Carl, I already had a look at them and couldn't figure out where to start. Do you have any advice? Thanks for your help. René On 21.03.2013 at 14:37, Carl Eugen Hoyos wrote: > René Calles writes: > >> I would like to ask you for help with pointing me to where to >> start (except for programming language) to understand the >> basics of programmatically using Libav in general. I would >> like to understand the general basics and hope someone could >> point me to some resource where I can find that. > > Please download current FFmpeg and look into doc/examples: > http://ffmpeg.org/download.html > > Carl Eugen > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user From mczarnek at objectvideo.com Thu Mar 21 19:02:14 2013 From: mczarnek at objectvideo.com (Czarnek, Matt) Date: Thu, 21 Mar 2013 14:02:14 -0400 Subject: [Libav-user] Setting the bitrate of a rtsp stream to be read Message-ID: Hello, I am reading a video stream coming in via RTSP. When I set it on the other end to 10 fps, it will read in successfully. However, when I set it to stream at 15 fps, it drops every 16th frame, which seems to be an iframe. So the question is: how do I change it so that I can support higher bit rates? Thank you, Matt -- Matt Czarnek, Software Engineer Work Phone: (760) 4-OBJVID aka: (760) 462-5843 ObjectVideo Inc.
http://www.objectvideo.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From aeseldyrx at gmail.com Thu Mar 21 22:30:41 2013 From: aeseldyrx at gmail.com (aeseldyrx) Date: Thu, 21 Mar 2013 22:30:41 +0100 Subject: [Libav-user] Help with a couple of bitmap scaling issues In-Reply-To: <20130321140135.GA2752@phare.normalesup.org> References: <20130321140135.GA2752@phare.normalesup.org> Message-ID: Thanks for the prompt reply, Nicolas. Should I report the green tint bug on trac? Any idea why this doesn't happen when FFmpeg is compiled with MSVC? > IMHO, lsws driving code should detect this kind of extreme scaling and split > it automatically into manageable steps. I have no idea what would be the > more efficient / aesthetic way of downscaling: 1/72 = 1/8 × 1/9? 1/2 × 1/36? > 1/36 × 1/2? People who know the scaling algorithms better than me can > answer. Thanks, I'll do it this way then. Any further suggestions on how to implement this optimally are of course more than welcome. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.george at normalesup.org Thu Mar 21 22:46:35 2013 From: nicolas.george at normalesup.org (Nicolas George) Date: Thu, 21 Mar 2013 22:46:35 +0100 Subject: [Libav-user] Help with a couple of bitmap scaling issues In-Reply-To: References: <20130321140135.GA2752@phare.normalesup.org> Message-ID: <20130321214634.GA1268@phare.normalesup.org> Le primidi 1er germinal, an CCXXI, aeseldyrx a écrit : > Should I report the green tint bug on trac? I just did: https://ffmpeg.org/trac/ffmpeg/ticket/2394 > Any idea why this doesn't happen when FFmpeg is compiled with MSVC? Not at all. Possibly assembly optimizations that are not used. Can you compare speed? > Thanks, I'll do it this way then. > Any further suggestions on how to implement this optimally are of course > more than welcome.
If you do extensive comparisons to find out the best way of splitting the scaling, please report the results. Regards, -- Nicolas George -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: Digital signature URL: From xuanyu.huang at gmail.com Fri Mar 22 02:40:39 2013 From: xuanyu.huang at gmail.com (=?GB2312?B?u8bQ+dPu?=) Date: Fri, 22 Mar 2013 12:40:39 +1100 Subject: [Libav-user] what negative start_time means Message-ID: Hi Guys, I met a problem when opening a testing ts file. https://dl.dropbox.com/u/89678527/aavv_mpeg2video_30_yuv420p_dar4x3_sar10x11_ac3_48000_2_1.ts The start_time field of all 4 streams (2 video streams and 2 audio streams) are negative values. Does anyone knows what this means? Great thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From bjoern.drabeck at gmail.com Fri Mar 22 09:26:20 2013 From: bjoern.drabeck at gmail.com (Bjoern Drabeck) Date: Fri, 22 Mar 2013 16:26:20 +0800 Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? In-Reply-To: <513F475C.1010301@scala.com> References: <513F475C.1010301@scala.com> Message-ID: > > I have got that to build, however compared to builds >> > from the zeranoe site (and also builds I have asked a >> > friend of mine to make for me using mingw with gcc), >> > I always end up with seeking problems. >> >> This is surprising. >> Are you sure that you are testing the same versions? >> > > I have downloaded the zeranoe build marked as 1.1.3 and I also got > http://ffmpeg.org/releases/ffmpeg-1.1.3.tar.bz2 and built that myself.. > so I would say it's the same version. However I got the same problem with > previous versions too (tried 1.0.1, and 1.1 for example). > > >> Did you try to disable optimizations? 
>> >> For some reason I get build errors as soon as I > use --disable-optimizations: > > LD libavutil/avutil-52.dll > Creating library libavutil/avutil.lib and object libavutil/avutil.exp > cpu.o : error LNK2019: unresolved external symbol _ff_get_cpu_flags_ppc > referenced in function _av_get_cpu_flags > cpu.o : error LNK2019: unresolved external symbol _ff_get_cpu_flags_arm > referenced in function _av_get_cpu_flags > libavutil/avutil-52.dll : fatal error LNK1120: 2 unresolved externals > make: *** [libavutil/avutil-52.dll] Error 1 > > If I don't disable optimizations I don't get that and it builds fine... > but no idea about that (I have never really looked into the ffmpeg code > except for the public headers) > > > > Parts of ffmpeg source code assume the compiler will remove the body of a > conditional if the condition is always false, for example from > libavutil.c/av_get_cpu_flags(): > > int av_get_cpu_flags(void) > { > if (checked) > return flags; > > if (ARCH_ARM) flags = ff_get_cpu_flags_arm(); > if (ARCH_PPC) flags = ff_get_cpu_flags_ppc(); > if (ARCH_X86) flags = ff_get_cpu_flags_x86(); > > checked = 1; > return flags; > } > > > If ARCH_ARM is the constant 0, the code assumes this reference to > ff_get_cpu_flags_arm() will disappear. Treats that as an optimization, so > if you turn off optimizations, the compiler will generate code to call > ff_get_cpu_flags_arm, but that function won't exist if ARCH_ARM is false. > > To get around that, I've used flags like these to compile a less optimized > version for testing purposes: > > --toolchain=msvc --optflags='-Zi -Og -Oy- -arch:SSE2' --extra-cflags='-Gy > -MDd' --extra-ldflags='-OPT:REF -DEBUG -VERBOSE' --enable-shared > > I've been using VC10. The thing that's handy for me is that it generates > .pdb files (via the -Zi flag) and I can mostly step through code with the > VC10 debugger. 
> > I had to modify the config.mak to get rid of some conflicting flags, > running the configuration script would add -Z7 (which contradicts -Zi). It > also would add -Oy which is the opposite of -Oy-, so I manually removed it. > > --Johno > > I have had some time to play around with the configure (but please bear in mind that this was my first time ever modifying a configure, so I am not quite sure these things make sense - but please feel free to correct me, am happy to learn more about this!). Anyway, my goal was to come to a configuration as described by John above, so that I can step through code with the MSVC debugger, and so far seems to work (ie compiled fine, and also seems can debug, but need to test more, but probably only after the weekend) I tried this on the 1.2 release, with the following configuration options: --prefix= --disable-encoders --disable-muxers --enable-hwaccels --enable-dxva2 --enable-shared --disable-static --toolchain=msvc --enable-small --disable-optimizations --enable-debug=3 I don't need encoders or muxers, so I skipped those, I enabled dxva2 (although it seems it gets enabled by default anyway, not sure though), I used msvc toolchain with debug enabled, and optimizations disabled but enabled small (that might be a bit misleading as it leads to -O1 -Oy- which is an optimization) Anyway, please see below my changes for the configure, as I said I don't know if they are really "correct" or might have side effects under other circumstances, but at least for me it worked to compile without getting any errors - feedback welcome! 
Around line 2430: added OPT:REF, DEBUG, and VERBOSE options for debug configuration under MSVC msvc) cc_default="c99wrap cl" ld_default="c99wrap link" + if enabled debug; then + add_ldflags -OPT:REF -DEBUG -VERBOSE + fi nm_default="dumpbin -symbols" ar_default="lib" target_os_default="win32" ;; Around line 2740: where it set the _cflags_speed and _cflags_size I added check for debug configuration as well, and under debug added -Oy- I also appended -MD (under release) and -MDd -Zi (under debug) to the _cflags elif $_cc 2>&1 | grep -q Microsoft; then _type=msvc _ident=$($cc 2>&1 | head -n1) _DEPCMD='$(DEP$(1)) $(DEP$(1)FLAGS) $($(1)DEP_FLAGS) $< 2>&1 | awk '\''/including/ { sub(/^.*file: */, ""); gsub(/\\/, "/"); if (!match($$0, / /)) print "$@:", $$0 }'\'' > $(@:.o=.d)' _DEPFLAGS='$(CPPFLAGS) $(CFLAGS) -showIncludes -Zs' + if enabled debug; then + _cflags_speed="-O2 -Oy-" + _cflags_size="-O1 -Oy-" + else _cflags_speed="-O2" _cflags_size="-O1" + fi # Nonstandard output options, to avoid msys path conversion issues, relies on wrapper to remap it if $_cc 2>&1 | grep -q Linker; then _ld_o='-out $@' else _ld_o='-Fe$@' fi _cc_o='-Fo $@' _cc_e='-P -Fi $@' _flags_filter=msvc_flags _ld_lib='lib%.a' _ld_path='-libpath:' _flags='-nologo' _cflags='-D_USE_MATH_DEFINES -Dinline=__inline -FIstdlib.h -Dstrtoll=_strtoi64' if [ $pfx = hostcc ]; then append _cflags -Dsnprintf=_snprintf fi + if enabled debug; then + append _cflags -MDd -Zi + else + append _cflags -MD + fi disable stripping fi Around line 4060: to get rid of some warnings about unknown options I added a check for msvc there + if disabled msvc; then enabled debug && add_cflags -g"$debuglevel" && add_asflags -g"$debuglevel" + fi So do these changes make sense? What can I improve? Anyone willing to try it (maybe John?) best regards Bjoern -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ashwini.dicholkar at accenture.com Fri Mar 22 07:29:09 2013 From: ashwini.dicholkar at accenture.com (ashwini.dicholkar at accenture.com) Date: Fri, 22 Mar 2013 06:29:09 +0000 Subject: [Libav-user] Video watermarking using ffmpeg Message-ID: <1768F08D7F16754998D3EF2FAEFDB2A74176A2@048-CH1MPN1-065.048d.mgd.msft.net> Hi, I need help with watermarking video files in a web application. Does ffmpeg support video watermarking? If yes, please point us to example code for watermarking a video file using ffmpeg. Thanks and regards Ashwini ________________________________ This message is for the designated recipient only and may contain privileged, proprietary, or otherwise confidential information. If you have received it in error, please notify the sender immediately and delete the original. Any other use of the e-mail by you is prohibited. Where allowed by local law, electronic communications with Accenture and its affiliates, including e-mail and instant messaging (including content), may be scanned by our systems for the purposes of information security and assessment of internal compliance with Accenture policy. ______________________________________________________________________________________ www.accenture.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhanghongqiang at xiaomi.com Fri Mar 22 19:55:14 2013 From: zhanghongqiang at xiaomi.com (=?gb2312?B?1cW668e/?=) Date: Fri, 22 Mar 2013 18:55:14 +0000 Subject: [Libav-user] How to extract a H264 video frame (container is 3GPP/MP4) into a JPG File (with libavformat and libavcodec) ? Message-ID: HI, Recently, I'm doing development to generate video clips' thumbnail pic files. But it is hard without a development guide; would you please show me some example code for it? The attachment is a sample video. And it needs to be rotated according to the stream metadata: rotate: 90 Thank you very much. P.S.
I tried to send a sample video clip as an attachment, but it is over the 40K size limit and was rejected by the admin. -------------------------------------------------------------------------------------------------------------------------------------- Freeman Mobile: 13811178263 E-Mail: zhanghongqiang at xiaomi.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From cehoyos at ag.or.at Fri Mar 22 20:33:35 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Fri, 22 Mar 2013 19:33:35 +0000 (UTC) Subject: [Libav-user] How to extract a H264 video frame (container is 3GPP/MP4) into a JPG File (with libavformat and libavcodec) ? References: Message-ID: ??? writes: > Recently, I'm doing development to generate video clips' > thumbnail pic files. But it is hard without a development > guide; would you please show me some example code for it? First confirm that ffmpeg (the command line application) allows you to do what you want, then look into doc/examples to learn how to use libavformat and libavcodec. Carl Eugen From zhanghongqiang at xiaomi.com Fri Mar 22 20:43:03 2013 From: zhanghongqiang at xiaomi.com (=?gb2312?B?1cW668e/?=) Date: Fri, 22 Mar 2013 19:43:03 +0000 Subject: [Libav-user] =?gb2312?b?tPC4tDogIEhvdyB0byBleHRyYWN0IGEgSDI2NCB2?= =?gb2312?b?aWRlbyBmcmFtZSAoY29udGFpbmVyIGlzCTNHUFAvTVA0KSBpbnRvIGEgSlBH?= =?gb2312?b?IEZpbGUgKHdpdGggbGliYXZmb3JtYXQgYW5kIGxpYmF2Y29kZSkgPw==?= In-Reply-To: References: , Message-ID: Yeah, ffmpeg -i video_clip.3gp -vframes 1 -ss 00:00:01 -vf "transpose=1" output.jpg (the first frame is not good enough) can give the right picture, but it is slow and cannot hold 200 cps with Xeon CPUs. I explored the sample code and the guides on the internet (most of them not up to date). I know how to open a video file, but I don't know how to get the correct key frame and how to output to a JPG file.
So the program would be like this: open the video file; get the frame count and seek to a good key frame; output to a JPG buffer; rotate it; save the buffer to a file. Would you please guide me to do it? Regards, -------------------------------------------------------------------------------------------------------------------------------------- Freeman Mobile: 13811178263 E-Mail: zhanghongqiang at xiaomi.com ________________________________________ From: libav-user-bounces at ffmpeg.org [libav-user-bounces at ffmpeg.org] on behalf of Carl Eugen Hoyos [cehoyos at ag.or.at] Sent: 2013-3-23 3:33 To: libav-user at ffmpeg.org Subject: Re: [Libav-user] How to extract a H264 video frame (container is 3GPP/MP4) into a JPG File (with libavformat and libavcodec) ? ??? writes: > Recently, I'm doing development to generate video clips' > thumbnail pic files. But it is hard without a development > guide; would you please show me some example code for it? First confirm that ffmpeg (the command line application) allows you to do what you want, then look into doc/examples to learn how to use libavformat and libavcodec. Carl Eugen _______________________________________________ Libav-user mailing list Libav-user at ffmpeg.org http://ffmpeg.org/mailman/listinfo/libav-user From cehoyos at ag.or.at Fri Mar 22 21:23:40 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Fri, 22 Mar 2013 20:23:40 +0000 (UTC) Subject: [Libav-user] =?utf-8?b?562U5aSNOiBIb3cgdG8gZXh0cmFjdCBhIEgyNjQg?= =?utf-8?q?video_frame_=28container_is_3GPP/MP4=29_into_a_JPG_File_?= =?utf-8?q?=28with_libavformat_and_libavcode=29_=3F?= References: , Message-ID: ??? writes: > ffmpeg -i video_clip.3gp -vframes 1 -ss 00:00:01 > -vf "transpose=1" output.jpg > (the first frame is not good enough) Do you mean the first frame is black / does not show the information you would like to have in your output jpg, or is there something wrong with the transcoding, like for example a non-keyframe gets encoded and the output image is broken?
If that is the case, please provide your complete, uncut console output and a sample. Please do not top-post here, Carl Eugen From cehoyos at ag.or.at Fri Mar 22 21:25:27 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Fri, 22 Mar 2013 20:25:27 +0000 (UTC) Subject: [Libav-user] Help with a couple of bitmap scaling issues References: <20130321140135.GA2752@phare.normalesup.org> <20130321214634.GA1268@phare.normalesup.org> Message-ID: Nicolas George writes: > > Any idea, why this doesn't happen, when FFmpeg is > > compiled with MSVC? > > Not at all. Possibly assembly optimization that are > > not used. The problem comes from asm MMX optimizations that do not get compiled with MSVC. Carl Eugen From john.orr at scala.com Fri Mar 22 21:44:04 2013 From: john.orr at scala.com (John Orr) Date: Fri, 22 Mar 2013 16:44:04 -0400 Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? In-Reply-To: References: <513F475C.1010301@scala.com> Message-ID: <514CC294.1030202@scala.com> On 3/22/2013 4:26 AM, Bjoern Drabeck wrote: > > So do these changes make sense? What can I improve? Anyone willing to > try it (maybe John?) > I tried this change against the 1.2 release, adding these to my typical configuration command line: --enable-small --disable-optimizations --enable-debug=3 Note that I had to replace an option I had been using in my typical configuration command line, --optflags='-arch:SSE2', and use this instead: --extra-cflags='-arch:SSE2'. --optflags overrides any optimization flags implied by --enable-small (or --enable-speed). With that change, I can compile, run and debug the resulting DLLs. There is a flag defined in the ffmpeg configuration file, "omit-frame-pointer". In VC10, that option is -Oy (frame pointer omission). For optimized builds (-O1 and -O2), -Oy is implied (according to the VC10 docs at http://msdn.microsoft.com/en-us/library/2kxx5t2c%28v=vs.100%29.aspx).
It seems the state of omit-frame-pointer may be out of sync with the flags you set for non-debug builds. The use of the -Oy- option turns off the "frame pointer omission" feature, which you pretty much need if you want visual studio to build a stack backtrace. --Johno From john.orr at scala.com Fri Mar 22 21:48:40 2013 From: john.orr at scala.com (John Orr) Date: Fri, 22 Mar 2013 16:48:40 -0400 Subject: [Libav-user] Building with MSVC toolchain resulting in seeking problem? In-Reply-To: <513FB7AA.4080201@scala.com> References: <513F475C.1010301@scala.com> <513FB7AA.4080201@scala.com> Message-ID: <514CC3A8.3090705@scala.com> On 3/12/2013 7:18 PM, John Orr wrote: > On 3/12/2013 6:18 PM, Carl Eugen Hoyos wrote: >> John Orr writes: >> >>> Parts of ffmpeg source code assume the compiler will remove >>> the body of a conditional if the condition is always false >> Could you test if the following fixes compilation with >> --disable-optimizations with msvc? >> Insert a line >> _cflags_noopt="-O1" << after >> the line >> _cflags_size="-O1" << which should be line 2746. > > I tried this in the 1.1.3 branch version of configure just above the > line: > > # Nonstandard output options, to avoid msys path conversion > issues, relies on wrapper to remap it > > It eventually fails to link: > I'm pretty sure I goofed when I did this test a few weeks ago. I think I left the --optflags='-arch:SSE2' in place, which basically ignores _cflags_noopts and _cflags_size. 
--Johno From zhanghongqiang at xiaomi.com Fri Mar 22 22:12:54 2013 From: zhanghongqiang at xiaomi.com (=?gb2312?B?1cW668e/?=) Date: Fri, 22 Mar 2013 21:12:54 +0000 Subject: [Libav-user] =?gb2312?b?tPC4tDogCbTwuLQ6IEhvdyB0byBleHRyYWN0IGEg?= =?gb2312?b?SDI2NCB2aWRlbyBmcmFtZSAoY29udGFpbmVyIGlzIDNHUFAvTVA0KSBpbnRv?= =?gb2312?b?IGEgSlBHIEZpbGUgKHdpdGggbGliYXZmb3JtYXQgYW5kIGxpYmF2Y29kZSkg?= =?gb2312?b?Pw==?= In-Reply-To: References: , , Message-ID: That's because most mobile phones record video with a bad-quality camera, and the first few frames are very dark; it is not that the transposing makes the image broken. Regards, -------------------------------------------------------------------------------------------------------------------------------------- Freeman Mobile: 13811178263 E-Mail: zhanghongqiang at xiaomi.com ________________________________________ From: libav-user-bounces at ffmpeg.org [libav-user-bounces at ffmpeg.org] on behalf of Carl Eugen Hoyos [cehoyos at ag.or.at] Sent: 2013-3-23 4:23 To: libav-user at ffmpeg.org Subject: Re: [Libav-user] Re: How to extract a H264 video frame (container is 3GPP/MP4) into a JPG File (with libavformat and libavcodec) ? ??? writes: > ffmpeg -i video_clip.3gp -vframes 1 -ss 00:00:01 > -vf "transpose=1" output.jpg > (the first frame is not good enough) Do you mean the first frame is black / does not show the information you would like to have in your output jpg, or is there something wrong with the transcoding, like for example a non-keyframe gets encoded and the output image is broken? If that is the case, please provide your complete, uncut console output and a sample.
Please do not top-post here, Carl Eugen _______________________________________________ Libav-user mailing list Libav-user at ffmpeg.org http://ffmpeg.org/mailman/listinfo/libav-user From zhanghongqiang at xiaomi.com Fri Mar 22 22:15:20 2013 From: zhanghongqiang at xiaomi.com (=?gb2312?B?1cW668e/?=) Date: Fri, 22 Mar 2013 21:15:20 +0000 Subject: [Libav-user] =?gb2312?b?tPC4tDogIEhlbHAgd2l0aCBhIGNvdXBsZSBvZiBi?= =?gb2312?b?aXRtYXAgc2NhbGluZyBpc3N1ZXM=?= In-Reply-To: References: <20130321140135.GA2752@phare.normalesup.org> <20130321214634.GA1268@phare.normalesup.org>, Message-ID: I'm using it on Linux, not Windows. It's mid night in China, when I get to work, I can provide the complete compile options to you. Regards, -------------------------------------------------------------------------------------------------------------------------------------- Freeman Mobile: 13811178263 E-Mail: zhanghongqiang at xiaomi.com ________________________________________ ???: libav-user-bounces at ffmpeg.org [libav-user-bounces at ffmpeg.org] ?? Carl Eugen Hoyos [cehoyos at ag.or.at] ????: 2013?3?23? 4:25 ???: libav-user at ffmpeg.org ??: Re: [Libav-user] Help with a couple of bitmap scaling issues Nicolas George writes: > > Any idea, why this doesn't happen, when FFmpeg is > > compiled with MSVC? > > Not at all. Possibly assembly optimization that are > not used. The problem comes from asm mmx optimization that does not get compiled with msvc. Carl Eugen _______________________________________________ Libav-user mailing list Libav-user at ffmpeg.org http://ffmpeg.org/mailman/listinfo/libav-user From aeseldyrx at gmail.com Sat Mar 23 01:03:44 2013 From: aeseldyrx at gmail.com (aeseldyrx) Date: Sat, 23 Mar 2013 01:03:44 +0100 Subject: [Libav-user] Help with a couple of bitmap scaling issues In-Reply-To: References: <20130321140135.GA2752@phare.normalesup.org> <20130321214634.GA1268@phare.normalesup.org> Message-ID: Sorry for the delayed reply. 
> I just did: > https://ffmpeg.org/trac/ffmpeg/ticket/2394 Great! I'll be sure to keep an eye on that ticket. > Not at all. Possibly assembly optimization that are not used. Can you > compare speed? > > The problem comes from asm mmx optimization that > > does not get compiled with msvc. Ah, okay. I wish I could be of more help with finding the root cause, but I have to admit that asm is not my forte. > If you do extensive comparisons to find out the best way of splitting the > scaling, please report the results. Will do. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brado at bighillsoftware.com Sat Mar 23 04:07:44 2013 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Fri, 22 Mar 2013 20:07:44 -0700 Subject: [Libav-user] Need audio help, runnable sample (Was: Posting a sample project on Github) In-Reply-To: <98D81E89-1997-4929-8745-182DD906F529@gmail.com> References: <14DD3B35-6C3C-4A85-B289-4E31AD25FD8E@bighillsoftware.com> <98D81E89-1997-4929-8745-182DD906F529@gmail.com> Message-ID: On Mar 13, 2013, at 3:30 PM, Oleg wrote: > And about solving your problem, first find where is the problem: In mic? In QTKit API? In ffmpeg API? Don't try to "repair" whole system when you don't know what exactly doesn't work. Understood, in fact, that's the motivation for posting the example. At this point, there are no errors being returned anywhere along the way -- all works, the audio is just junk. My belief is that the problem is not with the capture or the streaming, but rather likely with the conversion of the audio from its source sample format. I've finally gotten a runnable app w/source -- I tried to put some time into this so as to deliver a UI and source that would clearly demonstrate the problem I'm experiencing. I would really appreciate it if someone could take a look and let me know what the problem with it is, which most certainly is centered in the audio processing.
Here's the source to a runnable Mac app on GitHub: https://github.com/BigHillSoftware/QTFFmpeg WHAT THE APP DOES: - The app presents a UI that shows a video preview and audio level for video/audio being captured by QTKit. Clicking the "Start Streaming" button will open an output format context and will write an FLV file (my target format) to your Desktop. ***NOTE: This app also outputs a log file to the Application Support directory -- I am outputting this log for ease in log storage/exchange -- I just thought I'd mention that in case someone who helps out wants to get rid of the leftovers once removing this app. WHAT IS HAPPENING WHEN THE APP IS RUN: - If I write *only* video frames using the av_write_frame call, the resulting FLV video seems fine (no audio of course, which I need). - If I write both audio and video, the resulting FLV when played gives me audio that is ear-piercing junk, and the video seems to stream fine for about 15 seconds, and then just freezes. - If I write *only* audio, the resulting audio in the FLV is ear-piercing junk. I would be very grateful if any experts out there could just pull this thing and run it, and then take a look at the source. The crux of the FFmpeg API usage is in the class called QTFFAVStreamer. Please do not hesitate to ask me any questions, and feel free to contact me either with a reply to this thread, or offline at my email address of brado at bighillsoftware.com. Again, thank you very much for your help. 
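One plausible source of "ear-piercing junk" of the kind described above is a sample-format mismatch: capture frameworks often deliver float samples in [-1, 1], while many encoders expect packed signed 16-bit PCM. The sketch below is only an illustration of the kind of conversion that must be correct (real code would normally use libswresample rather than hand-rolling this); the function name is made up, and whether this is Brad's actual bug is an assumption:

```c
#include <assert.h>
#include <stdint.h>

/* Convert one float sample in [-1, 1] to packed signed 16-bit PCM,
 * clipping out-of-range input.  Feeding float bytes to an encoder that
 * expects s16 (or vice versa) produces exactly the loud noise described. */
static int16_t float_to_s16(float x)
{
    if (x > 1.0f)  x = 1.0f;   /* clip instead of letting the cast wrap */
    if (x < -1.0f) x = -1.0f;
    return (int16_t)(x * 32767.0f);
}
```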
Regards, Brad From brado at bighillsoftware.com Sat Mar 23 04:14:38 2013 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Fri, 22 Mar 2013 20:14:38 -0700 Subject: [Libav-user] av_interleaved_write_frame vs av_write_frame In-Reply-To: <20ABAF0F-BD00-49B0-9F9C-D3FE2171D27F@bighillsoftware.com> References: <20ABAF0F-BD00-49B0-9F9C-D3FE2171D27F@bighillsoftware.com> Message-ID: On Mar 17, 2013, at 8:49 PM, Brad O'Hearne wrote: > My question is this: what requirement is there to use one call over the other, and assuming for a moment that av_interleaved_write_frame is to be used when there is both video and audio (versus just one or the other), how is the fact that there's no guarantee when audio samples or video frames will be delivered (or if they are delivered at all) affect streaming? QTKit might be delivering both audio and video samples continually, or video only, or audio only -- there's no guarantee of absolutely either, and even if both are delivered, there's no guarantee of when samples will arrive. Perhaps I can rephrase the question so as to make it more clear -- I am interested in two things: 1. When should av_interleaved_write_frame be used vs. av_write frame? 2. What happens if av_interleaved_write_frame is used to combine audio and video, and for some reason the audio drops (no audio is captured), and therefore the calls to av_interleaved_write_frame cease for audio frames, but continue for video frames? Will the writing (and subsequent streaming) continue, or be interrupted? If the latter, how does one deal with audio or video dropping in an interleaved scenario? 
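The two questions above revolve around one ordering contract: the muxer must receive packets in non-decreasing dts order across all streams, and av_interleaved_write_frame() buffers internally to guarantee that, while av_write_frame() trusts the caller to supply packets already in order. The toy function below illustrates only that contract (plain C, not libavformat internals; the name is made up): it merges two per-stream timestamp queues, and shows that when one stream dries up, the remaining packets of the other stream are still emitted rather than lost.

```c
#include <assert.h>

/* Toy model: merge two per-stream dts queues (each already sorted) into
 * the single non-decreasing sequence a muxer needs to see. */
static int interleave_dts(const int *a, int na, const int *b, int nb, int *out)
{
    int i = 0, j = 0, n = 0;
    while (i < na && j < nb)
        out[n++] = a[i] <= b[j] ? a[i++] : b[j++];
    while (i < na) out[n++] = a[i++];  /* one stream stopped: drain the other */
    while (j < nb) out[n++] = b[j++];
    return n;
}
```

In other words, if audio capture stops, writing simply continues with video-only packets; what an interleaving writer cannot do is emit a packet whose dts is older than one it has already passed to the muxer.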
Thanks, Brad From shashi580 at gmail.com Sat Mar 23 05:35:43 2013 From: shashi580 at gmail.com (Shashi Bhushan) Date: Sat, 23 Mar 2013 10:05:43 +0530 Subject: [Libav-user] video conversion using vc++ Message-ID: Hello friends, I want to develop an application that can convert one video file format to another; for example, if the file is AVI, convert it to MP4 using H264 and AAC, with the help of a C++ (Windows) application. I have tried it: I can convert the video format, but sound is not available, so please help me to solve this problem. If possible, please send a sample for it. I have already submitted my code on the forum. I am using the latest version of ffmpeg. Regards Shashi Bhushan From pauli.suuraho at gmail.com Sun Mar 24 15:23:41 2013 From: pauli.suuraho at gmail.com (Pauli Suuraho) Date: Sun, 24 Mar 2013 16:23:41 +0200 Subject: [Libav-user] What is the correct way to read frames Message-ID: Hello all. I'm a newcomer to ffmpeg, and I've been bugging my mind about this one thing. I have videoclips (either h264 or prores), and I want to extract all the frames. I'm using 32 bit ffmpeg-20130322-git-e0e8c20-win32 zeranoe shared+dev and Qt compiled using MSVC2010. I'm using the following code to extract the frames. http://pastebin.com/DrC9H6g0 If I have a test clip that is 30 frames, 30fps long, for some reason the lib always extracts only 24 frames and then says end of file. FFmpeg.exe extracts all the frames (for example to PNGs) correctly. So clearly I'm doing something wrong. This happens with every clip I have. I've even tried to repack the clips with ffmpeg but the output is the same: the last frames won't get noticed. I guess this has something to do with I,P,B-frames, but not sure how. Any help is appreciated! Thanks, Pauli -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mybrokenbeat at gmail.com Sun Mar 24 15:37:53 2013 From: mybrokenbeat at gmail.com (Oleg) Date: Sun, 24 Mar 2013 16:37:53 +0200 Subject: [Libav-user] What is the correct way to read frames In-Reply-To: References: Message-ID: <852FBAE4-17D6-4A60-A97A-038993890C75@gmail.com> There can be more than 1 frame per packet, and your code assumes that there is only 1 frame per packet. You should do something like this:

AVPacket tmp_pkt;
int size;

....

tmp_pkt.data = pkt.data;
tmp_pkt.size = pkt.size;

while ((size = avcodec_decode_video2(ctx, frame, &got_pict, &tmp_pkt)) > 0) {
    store_decoded_frame(frame);
    tmp_pkt.data += size;
    tmp_pkt.size -= size;
}

24.03.2013, at 16:23, Pauli Suuraho wrote: > Hello all. > > I'm a newcomer to use ffmpeg, and I've been bugging my mind about this one thing. > > I have videoclips (either h264 or prores), and I want to extract all the frames. I'm using 32 bit ffmpeg-20130322-git-e0e8c20-win32 zeranoe shared+dev and Qt compiled using MSVC2010. > > I'm using the following code to extract the frames. > > http://pastebin.com/DrC9H6g0 > > If I have a test clip that is 30 frames, 30fps long, for some reason the lib always extracts only 24 frames and then says end of file. > > FFmpeg.exe extracts all the frames (for example to PNGs) correctly. So clearly I'm doing something wrong. > > This happens with every clip I have. I've even tried to repack the clips with ffmpeg but the output is same: last frames won't get noticed. > > I guess this has something to do with I,P,B-frames, but not sure how. > > Any help is appreciated! > > Thanks, > Pauli > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user -------------- next part -------------- An HTML attachment was scrubbed...
URL:

From pauli.suuraho at gmail.com  Sun Mar 24 15:58:26 2013
From: pauli.suuraho at gmail.com (Pauli Suuraho)
Date: Sun, 24 Mar 2013 16:58:26 +0200
Subject: [Libav-user] What is the correct way to read frames
In-Reply-To: <852FBAE4-17D6-4A60-A97A-038993890C75@gmail.com>
References: <852FBAE4-17D6-4A60-A97A-038993890C75@gmail.com>
Message-ID:

I changed the code to the following:

while (av_read_frame(pFormatCtx, &packet) >= 0)
{
    avcodec_get_frame_defaults(pFrame);
    int frameFinished;
    if (packet.stream_index == i) // Is this a packet from the video stream? -> decode video frame
    {
        ffmpeg::AVPacket tmp_pkt;
        tmp_pkt.data = packet.data;
        tmp_pkt.size = packet.size;
        qint64 size = 0;
        while ((size = avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &tmp_pkt)) > 0)
        {
            if (frameFinished)
            {
                qDebug() << "Codec reported frame number to be " << pCodecCtx->frame_number;
            }
            tmp_pkt.data += size;
            tmp_pkt.size -= size;
        }
    }
    av_free_packet(&packet); // Free the packet that was allocated by av_read_frame
}

Now I get the error "[h264 @ 000ff140] no picture" 30 times, and frameFinished = 0.

-Pauli

On 24 March 2013 16:37, Oleg wrote:

> There can be more than 1 frame per packet and your code assuming that there is only 1 frame per packet.
> You should do smth like that:
>
> AVPacket tmp_pkt;
> int size;
>
> ....
>
> tmp_pkt.data = pkt.data;
> tmp_pkt.size = pkt.size;
>
> while ( (size = avcodec_decode_video2(ctx,frame,&got_pict,&tmp_pkt) ) > 0)
> {
> store_decoded_frame(frame);
> tmp_pkt.data += size;
> tmp_pkt.size -= size;
> }
>
> 24.03.2013, at 16:23, Pauli Suuraho wrote:
>
> Hello all.
>
> I'm a newcomer to use ffmpeg, and I've been bugging my mind about this one thing.
>
> I have videoclips (either h264 or prores), and I want to extract all the frames. I'm using 32 bit ffmpeg-20130322-git-e0e8c20-win32 zeranoe shared+dev and Qt compiled using MSVC2010.
>
> I'm using the following code to extract the frames.
> > http://pastebin.com/DrC9H6g0
> >
> > If I have a test clip that is 30 frames, 30fps long, for some reason the lib always extracts only 24 frames and then says end of file.
> >
> > FFmpeg.exe extracts all the frames (for example to PNGs) correctly. So clearly I'm doing something wrong.
> >
> > This happens with every clip I have. I've even tried to repack the clips with ffmpeg but the output is same: last frames won't get noticed.
> >
> > I guess this has something to do with I,P,B-frames, but not sure how.
> >
> > Any help is appreciated!
> >
> > Thanks,
> > Pauli
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nicolas.george at normalesup.org  Sun Mar 24 16:04:21 2013
From: nicolas.george at normalesup.org (Nicolas George)
Date: Sun, 24 Mar 2013 16:04:21 +0100
Subject: [Libav-user] What is the correct way to read frames
In-Reply-To:
References:
Message-ID: <20130324150421.GA19344@phare.normalesup.org>

Le quartidi 4 germinal, an CCXXI, Pauli Suuraho a écrit :
> I'm a newcomer to use ffmpeg, and I've been bugging my mind about this one thing.
>
> I have videoclips (either h264 or prores), and I want to extract all the frames. I'm using 32 bit ffmpeg-20130322-git-e0e8c20-win32 zeranoe shared+dev and Qt compiled using MSVC2010.
>
> I'm using the following code to extract the frames.
>
> http://pastebin.com/DrC9H6g0

This code is not very readable, due to the ugly C++ coding style and bloat.

> If I have a test clip that is 30 frames, 30fps long, for some reason the lib always extracts only 24 frames and then says end of file.

If I read it correctly, you are forgetting to flush the decoder after the end of the stream.
You have to feed empty packets to the decoder until it returns no frame. Note: the decoding example seems slightly invalid in that respect.

Please remember not to top-post on this mailing list; if you do not know what that means, look it up.

Regards,

-- 
Nicolas George
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 198 bytes
Desc: Digital signature
URL:

From pauli.suuraho at gmail.com  Sun Mar 24 16:42:15 2013
From: pauli.suuraho at gmail.com (Pauli Suuraho)
Date: Sun, 24 Mar 2013 17:42:15 +0200
Subject: [Libav-user] What is the correct way to read frames
In-Reply-To: <20130324150421.GA19344@phare.normalesup.org>
References: <20130324150421.GA19344@phare.normalesup.org>
Message-ID:

> This code is not very readable, due to the ugly c++ coding style and bloat.

Oh, sorry you had problems reading my code.

> If I read it correctly, you are forgetting to flush the decoder after the
> end of the stream. You have to feed empty packets to the decoder until it
> returns no frame.

That's it! The demuxing.c example shows how to flush the codec.

Now I'm getting all the frames nicely, at least as long as there is one frame per packet.

-Pauli
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mybrokenbeat at gmail.com  Sun Mar 24 17:07:18 2013
From: mybrokenbeat at gmail.com (Oleg)
Date: Sun, 24 Mar 2013 18:07:18 +0200
Subject: [Libav-user] What is the correct way to read frames
In-Reply-To:
References: <20130324150421.GA19344@phare.normalesup.org>
Message-ID: <45EF74F5-63C6-47B7-9FB5-91A05A645021@gmail.com>

Yes, you need to flush the decoder as well. You got errors while decoding because simply copying the AVPacket's data and size no longer works (sorry, that was my fault).
It should be like the following:

uint8_t *orig_ptr;
ssize_t orig_size;

orig_ptr = pkt.data;
orig_size = pkt.size;

while (pkt.size > 0 && (size = avcodec_decode_video2(...)) > 0)
{
    pkt.data += size;
    pkt.size -= size;
}

pkt.data = orig_ptr;
pkt.size = orig_size;
av_free_packet(&pkt);

24.03.2013, at 17:42, Pauli Suuraho wrote:

> > This code is not very readable, due to the ugly c++ coding style and bloat.
>
> Oh, sorry you had problems reading my code.
>
> >If I read it correctly, you are forgetting to flush the decoder after the
> >end of the stream. You have to feed empty packets to the decoder until it
> >returns no frame.
>
> That's it! demuxing.c example had example how to flush the codec.
>
> Now I'm nicely getting all the frames, well at least if there is one frame per packet.
>
> -Pauli
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pauli.suuraho at gmail.com  Sun Mar 24 18:00:43 2013
From: pauli.suuraho at gmail.com (Pauli Suuraho)
Date: Sun, 24 Mar 2013 19:00:43 +0200
Subject: [Libav-user] What is the correct way to read frames
In-Reply-To: <45EF74F5-63C6-47B7-9FB5-91A05A645021@gmail.com>
References: <20130324150421.GA19344@phare.normalesup.org> <45EF74F5-63C6-47B7-9FB5-91A05A645021@gmail.com>
Message-ID:

Works perfectly! Thank you both for your help.

One last question to make sure I understood everything:

> pkt.data = orig_ptr;
> pkt.size = orig_size;

Why do you set the data back before you free the packet? I tried without it and it seemed to work. Is it needed?

-Pauli
-------------- next part --------------
An HTML attachment was scrubbed...
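[Editor's note] The other half of the fix discussed in this thread, flushing the decoder at end of stream as Nicolas describes and as the demuxing.c example shows, can be sketched like this (same-era API; store_decoded_frame() is again a placeholder callback, not a libav function). Decoders with reordering delay keep frames buffered internally, so after av_read_frame() reports EOF you feed empty packets until no more pictures come out:

```c
#include <libavcodec/avcodec.h>

/* After the demuxer hits EOF, drain the frames still buffered inside
 * the decoder (B-frame reordering delay, etc.) by feeding it empty
 * packets until it stops producing pictures. */
static void flush_decoder(AVCodecContext *ctx, AVFrame *frame,
                          void (*store_decoded_frame)(AVFrame *))
{
    AVPacket pkt;
    int got_picture;

    av_init_packet(&pkt);
    pkt.data = NULL;   /* an empty packet signals "drain" */
    pkt.size = 0;

    do {
        if (avcodec_decode_video2(ctx, frame, &got_picture, &pkt) < 0)
            break;
        if (got_picture)
            store_decoded_frame(frame);
    } while (got_picture);
}
```

This is why a 30-frame clip can appear to yield only 24 frames: the last few pictures are still inside the decoder when the demuxer runs out of packets.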
URL:

From mybrokenbeat at gmail.com  Sun Mar 24 18:50:47 2013
From: mybrokenbeat at gmail.com (Oleg)
Date: Sun, 24 Mar 2013 19:50:47 +0200
Subject: [Libav-user] What is the correct way to read frames
In-Reply-To:
References: <20130324150421.GA19344@phare.normalesup.org> <45EF74F5-63C6-47B7-9FB5-91A05A645021@gmail.com>
Message-ID:

Because inside the while() loop you change the pkt.data field, so after the loop completes the av_free() inside av_free_packet() would free an invalid pointer. That can cause a crash or a memory leak.

24.03.2013, at 19:00, Pauli Suuraho wrote:

> Works perfectly! Thank you both for your help.
>
> One last question to make sure I understood everything..
>
> >pkt.data = orig_ptr;
> >pkt.size = orig_size;
>
> Why do you set the data back before you free the packet? I tried without it and it seemed to work. Is it needed?
>
> -Pauli
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From xftzg2013 at 126.com  Mon Mar 25 10:53:54 2013
From: xftzg2013 at 126.com (xftzg)
Date: Mon, 25 Mar 2013 17:53:54 +0800 (CST)
Subject: [Libav-user] How to Frame Step Forwards/Backwards?
Message-ID: <4433ed0c.d0bf.13da0f7ea98.Coremail.xftzg2013@126.com>

Hello,

I know that seeking works on the PTS of a frame. I want to perform a 'Step Forwards/Backwards' operation in my player, but the problem is that seeking to a frame does not always work as I expect.

int64_t time_stamp = 0;
time_stamp = current_pts - frame_span; // frame_span is calculated from avg_frame_rate, in the stream time base
search_flag = AVSEEK_FLAG_BACKWARD;
int ok = av_seek_frame(format_ctx, video_stream, time_stamp, search_flag);

The result sometimes jumps to the current key frame, sometimes several key frames earlier. What should I do to fix this?
By the way, is there any convenient way to know the index of the frame that is currently playing?

Best Regards.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cehoyos at ag.or.at  Mon Mar 25 11:42:53 2013
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Mon, 25 Mar 2013 10:42:53 +0000 (UTC)
Subject: [Libav-user] How to Frame Step Forwards/Backwards?
References: <4433ed0c.d0bf.13da0f7ea98.Coremail.xftzg2013@126.com>
Message-ID:

xftzg writes:

> I want to perform a 'Step Forwards/Backwards' operation in my player

This is not a trivial task. I believe there are two main approaches: you can cache as many frames as you believe the user will want to step backwards, or you seek backwards to the preceding keyframe and decode (again) until the requested frame (this may take considerable time). If your problem is that you cannot seek backwards to a keyframe, please post code that allows us to test this.

Carl Eugen

From belkevich at mlsdev.com  Mon Mar 25 13:12:01 2013
From: belkevich at mlsdev.com (Alexey Belkevich)
Date: Mon, 25 Mar 2013 14:12:01 +0200
Subject: [Libav-user] noise on mp3-decoding
Message-ID:

Hello!

I'm building an iOS audio player. I tested playback in the iOS Simulator (i386) and everything was fine. But when I tested it on an iPhone (ARM), there was only noise. Other formats play fine. I've done a little research, and here is what I found:
1) av_read_frame - returns the same data on both device and simulator (I've checked it in packet->data)
2) avcodec_decode_audio4 - returns DIFFERENT data on device and simulator (I've checked it in frame->data)

Can it be related to the ffmpeg compile options? Any ideas?

-- 
Alexey Belkevich
-------------- next part --------------
An HTML attachment was scrubbed...
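[Editor's note] Carl Eugen's second approach (seek backwards to a keyframe, then decode forward to the requested frame) can be sketched roughly as follows. This is an illustration against the same-era API, not tested code; it assumes the stream carries usable pkt_pts values, and step_back() is a hypothetical helper name:

```c
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

/* Land on the frame whose PTS is target_pts: seek to the keyframe at
 * or before it, flush the decoder state, then decode forward until the
 * wanted picture comes out.  Slow when keyframes are far apart. */
static int step_back(AVFormatContext *fmt, AVCodecContext *dec,
                     int stream_index, AVFrame *frame, int64_t target_pts)
{
    AVPacket pkt;
    int got_picture;

    if (av_seek_frame(fmt, stream_index, target_pts,
                      AVSEEK_FLAG_BACKWARD) < 0)
        return -1;
    avcodec_flush_buffers(dec);   /* drop state from the old position */

    while (av_read_frame(fmt, &pkt) >= 0) {
        if (pkt.stream_index == stream_index) {
            avcodec_decode_video2(dec, frame, &got_picture, &pkt);
            if (got_picture && frame->pkt_pts != AV_NOPTS_VALUE &&
                frame->pkt_pts >= target_pts) {
                av_free_packet(&pkt);
                return 0;  /* frame now holds the wanted picture */
            }
        }
        av_free_packet(&pkt);
    }
    return -1;  /* target never reached */
}
```

The AVSEEK_FLAG_BACKWARD flag only guarantees landing at or before the timestamp, which is why the observed behaviour of jumping to an earlier keyframe is expected: the decode-forward loop is what turns the keyframe landing into an exact frame step.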
URL:

From cmst at live.com  Mon Mar 25 12:35:16 2013
From: cmst at live.com (Dolevo Jay)
Date: Mon, 25 Mar 2013 11:35:16 +0000
Subject: [Libav-user] --disable-asm --disable-yasm options
Message-ID:

Hello all,

I am currently using the libav decoder on an OpenEmbedded platform. When configuring libav, I was forced to use the --disable-yasm option because of our old OpenEmbedded system. I realized that I had also added the --disable-asm option, which I thought was the same thing. Now I see that libav compiles with just ./configure --disable-yasm.

Questions:
1. Could you please tell me what the difference between these options is? Obviously one of them is for asm optimization and the other for, let's say, the asm compiler, but what is the correlation between them?
2. Do I lose a lot of decoding performance because of the --disable-yasm option?

Thanks a lot,
Regards,

Jay
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mybrokenbeat at gmail.com  Mon Mar 25 22:08:41 2013
From: mybrokenbeat at gmail.com (Oleg)
Date: Mon, 25 Mar 2013 23:08:41 +0200
Subject: [Libav-user] --disable-asm --disable-yasm options
In-Reply-To:
References:
Message-ID: <14380DDA-D0A6-4743-81D1-F3A874312A3D@gmail.com>

1. --disable-yasm disables compiling of the yasm files; --disable-asm disables the asm code in the *.c files.
2. Yes, you will lose a lot of performance. Try to install yasm on your embedded system; I'm sure it should be possible somehow if it's Linux.

25.03.2013, at 13:35, Dolevo Jay wrote:

> Hello all,
>
> I am currently using libav decoder in openembedded platform. When configuring the libav, I was forced to use --disable-yasm option because of our old version openembedded system. I realized that I also added --disable-asm option, which I thought that they are the same. Now I see that the libav compiles ./configure --disable-yasm.
>
> Questions:
> 1. Could you please tell me what is the difference between these options?
Obviously one of them is for asm optimization and the other for, let say, asm compiler but what is the correlation between them.
> 2. Do I lose a lot of decoding performance because of --disable-yasm option?
>
> Thanks a lot,
> Regards,
>
> Jay
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cehoyos at ag.or.at  Mon Mar 25 22:36:50 2013
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Mon, 25 Mar 2013 21:36:50 +0000 (UTC)
Subject: [Libav-user] --disable-asm --disable-yasm options
References:
Message-ID:

Dolevo Jay writes:

> Do I lose a lot of decoding performance because
> of --disable-yasm option?

If you are compiling for x86 (either 32 or 64 bit), you should definitely compile with yasm for performance reasons; many optimisations have been ported from inline assembly to yasm over the last years. (yasm cannot be used when compiling for other hardware.)

--disable-asm is relevant for all (or most) platforms; you should never use it except for debugging (the same goes for --disable-yasm). If you have any problems when not using it, please report them either here or on trac.

Please understand that the project is called "FFmpeg"; you can find the current version at http://ffmpeg.org/download.html

Carl Eugen

From andrey.krieger.utkin at gmail.com  Tue Mar 26 01:31:59 2013
From: andrey.krieger.utkin at gmail.com (Andrey Utkin)
Date: Tue, 26 Mar 2013 02:31:59 +0200
Subject: [Libav-user] [HOWTO] How to detect corrupted image files with ffmpeg/libavcodec
Message-ID:

I have written a very short explanation of the error recognition option. English and Russian texts are available.

http://blog.krieger.pp.ua/?p=45&lang=en
http://blog.krieger.pp.ua/?p=45&lang=ru

I hope it will be useful for somebody. Criticism is appreciated, as well as requests for new articles.
-- 
Andrey Utkin

From billconan at gmail.com  Tue Mar 26 06:53:01 2013
From: billconan at gmail.com (Shi Yan)
Date: Mon, 25 Mar 2013 22:53:01 -0700
Subject: [Libav-user] use video codec to stream slides?
Message-ID:

Hello there,

I'm prototyping an idea for streaming slides (like PowerPoint slides). The content of a slide is almost static. Once in a few seconds, the presenter can flip to the next slide. There can be some animation within a slide, or transition animation between two slides.

Because I'm not familiar with video codecs and the content is mostly static, my first prototype is actually based on PNG images. Whenever the slide changes, I capture the current frame, compress it as a PNG image, and send it to the client via TCP. It works mostly OK, but I notice some latency when animation happens. I suspect the PNG frames are too large.

I want to use a video codec to reduce latency, but I have some questions. I looked at the avcodec documentation as well as the VP8 documentation. It seems that when I call the encoding function to encode a frame, I am not guaranteed to get the encoded result immediately; the codec can delay the generation of a frame. I don't understand what triggers (flushes) frame generation. Does encoding the next frame trigger the generation of the previous frame?

In my case, since the content is static, I don't need to call the encode function very often, only when the frame changes. Will that cause frame loss? How does a video codec decide which frame is the key frame? When the content is static, can I always stream key frames?

Also, is it easy to change the frame resolution dynamically? For example, I start the video in 1080p, but while I'm streaming, the resolution changes to 800x600?

Thanks,
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From xftzg2013 at 126.com  Tue Mar 26 10:33:49 2013
From: xftzg2013 at 126.com (xftzg)
Date: Tue, 26 Mar 2013 17:33:49 +0800 (CST)
Subject: [Libav-user] How to Frame Step Forwards/Backwards?
In-Reply-To:
References: <4433ed0c.d0bf.13da0f7ea98.Coremail.xftzg2013@126.com>
Message-ID: <41882f00.1cdc7.13da60be07b.Coremail.xftzg2013@126.com>

Hi Carl,

The following is how I seek a frame backward:

AVStream *stream = m_format_ctx->streams[m_video_index];
int64_t time_av_time_base = 1 * AV_TIME_BASE / stream->avg_frame_rate.num;
int64_t frame_span = av_rescale_q(time_av_time_base, AV_TIME_BASE_Q, stream->time_base);
int64_t timestamp = m_current_pts - m_frame_span;
int ok = av_seek_frame(m_format_ctx, m_video_index, timestamp, AVSEEK_FLAG_BACKWARD);
avcodec_flush_buffers(m_codec_ctx);

If there is no straightforward way to solve this problem, I'll consider the frame-caching method.

And do you know:
> is there any convenient way to know the index of the frame, which is now playing?

Thanks very much.

BR
xftzg

At 2013-03-25 18:42:53, "Carl Eugen Hoyos" wrote:
>xftzg writes:
>
>> I want to perform a 'Step Forwards/Backwards' operation in my player
>
>This is not a trivial task.
>I believe there are two main approaches:
>You can cache as many frames as you believe the user will want
>to step backwards or you seek backwards to the next keyframe
>and decode (again) until the requested frame (this may take
>considerable time).
>If your problem is that you cannot seek backwards to a keyframe,
>please post code that allows to test this.
>
>Carl Eugen
>
>_______________________________________________
>Libav-user mailing list
>Libav-user at ffmpeg.org
>http://ffmpeg.org/mailman/listinfo/libav-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gvelez at vicomtech.org  Tue Mar 26 16:39:01 2013
From: gvelez at vicomtech.org (Gorka Vélez)
Date: Tue, 26 Mar 2013 16:39:01 +0100
Subject: [Libav-user] problems cross compiling OpenCV with FFMpeg for ARM Linux
Message-ID:

Hello all,

I am trying to cross compile OpenCV with FFmpeg for ARM Linux, but I get some errors, and I think it's because I am not cross-compiling FFmpeg correctly, or I'm missing something.

First, I cross compiled FFmpeg using these flags:

./configure --enable-cross-compile --cross-prefix=arm-linux-gnueabi- --cc=arm-linux-gnueabi-gcc --cxx=arm-linux-gnueabi-g++ --arch=arm --target-os=linux --disable-armv5te --disable-armv6 --disable-armv6t2 --enable-libopencv --enable-pic --prefix=/home/mypath/ffmpeg_binARM

Then, I tried to cross compile OpenCV following this guide: http://docs.opencv.org/doc/tutorials/introduction/crosscompilation/arm_crosscompile_with_cmake.html

cd ~/opencv/platforms/linux
mkdir -p Build_ARM
cd Build_ARM
cmake -DSOFTFP=ON -DCMAKE_TOOLCHAIN_FILE=../arm-gnueabi.toolchain.cmake ../../..

...and I get the following errors when I run "make". Can anyone help me? Thank you!
* * *...* *[ 34%] Building CXX object modules/highgui/CMakeFiles/opencv_highgui.dir/src/bitstrm.cpp.o* *Linking CXX shared library ../../lib/libopencv_highgui.so* */home/mypath/ffmpeg_binARM/lib/libavformat.a(matroskaenc.o): In function `get_aac_sample_rates':* */home/mypath/ffmpeg/libavformat/matroskaenc.c:460: undefined reference to `avpriv_mpeg4audio_get_config'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(matroskaenc.o): In function `put_xiph_codecpriv':* */home/mypath/ffmpeg/libavformat/matroskaenc.c:440: undefined reference to `avpriv_split_xiph_headers'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(utils.o): In function `has_decode_delay_been_guessed':* */home/mypath/ffmpeg/libavformat/utils.c:916: undefined reference to `avpriv_h264_has_num_reorder_frames'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(utils.o): In function `ff_read_frame_flush':* */home/mypath/ffmpeg/libavformat/utils.c:1624: undefined reference to `av_parser_close'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(utils.o): In function `has_decode_delay_been_guessed':* */home/mypath/ffmpeg/libavformat/utils.c:916: undefined reference to `avpriv_h264_has_num_reorder_frames'* */home/mypath/ffmpeg/libavformat/utils.c:916: undefined reference to `avpriv_h264_has_num_reorder_frames'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(utils.o): In function `parse_packet':* */home/mypath/ffmpeg/libavformat/utils.c:1285: undefined reference to `av_parser_parse2'* */home/mypath/ffmpeg/libavformat/utils.c:1352: undefined reference to `av_parser_close'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(utils.o): In function `read_frame_internal':* */home/mypath/ffmpeg/libavformat/utils.c:1423: undefined reference to `av_parser_init'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(utils.o): In function `avformat_find_stream_info':* */home/mypath/ffmpeg/libavformat/utils.c:2715: undefined reference to `av_parser_init'* */home/mypath/ffmpeg/libavformat/utils.c:2999: undefined reference to 
`avcodec_pix_fmt_to_codec_tag'* */home/mypath/ffmpeg/libavformat/utils.c:3000: undefined reference to `avpriv_find_pix_fmt'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(utils.o): In function `tb_unreliable':* */home/mypath/ffmpeg/libavformat/utils.c:2674: undefined reference to `ff_raw_pix_fmt_tags'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(utils.o): In function `estimate_timings_from_pts':* */home/mypath/ffmpeg/libavformat/utils.c:2311: undefined reference to `av_parser_close'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(utils.o): In function `ff_free_stream':* */home/mypath/ffmpeg/libavformat/utils.c:3214: undefined reference to `av_parser_close'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(allformats.o): In function `av_register_all':* */home/mypath/ffmpeg/libavformat/allformats.c:60: undefined reference to `avcodec_register_all'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(dv.o): In function `dv_read_header':* */home/mypath/ffmpeg/libavformat/dv.c:511: undefined reference to `avpriv_dv_frame_profile'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(dv.o): In function `avpriv_dv_produce_packet':* */home/mypath/ffmpeg/libavformat/dv.c:364: undefined reference to `avpriv_dv_frame_profile'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(dv.o): In function `dv_frame_offset':* */home/mypath/ffmpeg/libavformat/dv.c:413: undefined reference to `avpriv_dv_codec_profile'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(dvenc.o): In function `dv_init_mux':* */home/mypath/ffmpeg/libavformat/dvenc.c:317: undefined reference to `avpriv_dv_codec_profile'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(flacdec.o): In function `flac_read_header':* */home/mypath/ffmpeg/libavformat/flacdec.c:178: undefined reference to `avpriv_flac_parse_block_header'* */home/mypath/ffmpeg/libavformat/flacdec.c:216: undefined reference to `avpriv_flac_parse_streaminfo'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(flacenc.o): In function `flac_write_trailer':* 
*/home/mypath/ffmpeg/libavformat/flacenc.c:106: undefined reference to `avpriv_flac_is_extradata_valid'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(flacenc_header.o): In function `ff_flac_write_header':* */home/mypath/ffmpeg/libavformat/flacenc_header.c:37: undefined reference to `avpriv_flac_is_extradata_valid'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(frmdec.o): In function `frm_read_header':* */home/mypath/ffmpeg/libavformat/frmdec.c:64: undefined reference to `avpriv_find_pix_fmt'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(isom.o): In function `ff_mp4_read_dec_config_descr':* */home/mypath/ffmpeg/libavformat/isom.c:460: undefined reference to `avpriv_mpeg4audio_get_config'* */home/mypath/ffmpeg/libavformat/isom.c:442: undefined reference to `avpriv_mpa_freq_tab'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(latmenc.o): In function `latm_decode_extradata':* */home/mypath/ffmpeg/libavformat/latmenc.c:63: undefined reference to `avpriv_mpeg4audio_get_config'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(latmenc.o): In function `latm_write_packet':* */home/mypath/ffmpeg/libavformat/latmenc.c:196: undefined reference to `avpriv_copy_bits'* */home/mypath/ffmpeg/libavformat/latmenc.c:198: undefined reference to `avpriv_align_put_bits'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(latmenc.o): In function `latm_write_frame_header':* */home/mypath/ffmpeg/libavformat/latmenc.c:123: undefined reference to `avpriv_copy_bits'* */home/mypath/ffmpeg/libavformat/latmenc.c:129: undefined reference to `avpriv_copy_pce_data'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(latmenc.o): In function `latm_write_packet':* */home/mypath/ffmpeg/libavformat/latmenc.c:194: undefined reference to `avpriv_copy_bits'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(latmenc.o): In function `latm_write_frame_header':* */home/mypath/ffmpeg/libavformat/latmenc.c:119: undefined reference to `avpriv_copy_bits'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(matroskadec.o): In function 
`matroska_read_header':* */home/mypath/ffmpeg/libavformat/matroskadec.c:1818: undefined reference to `avpriv_mpeg4audio_sample_rates'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(mov.o): In function `mov_read_dac3':* */home/mypath/ffmpeg/libavformat/mov.c:652: undefined reference to `avpriv_ac3_channel_layout_tab'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(mov.o): In function `mov_read_dec3':* */home/mypath/ffmpeg/libavformat/mov.c:679: undefined reference to `avpriv_ac3_channel_layout_tab'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(mp3dec.o): In function `check':* */home/mypath/ffmpeg/libavformat/mp3dec.c:268: undefined reference to `avpriv_mpegaudio_decode_header'* */home/mypath/ffmpeg/libavformat/mp3dec.c:268: undefined reference to `avpriv_mpegaudio_decode_header'* */home/mypath/ffmpeg/libavformat/mp3dec.c:268: undefined reference to `avpriv_mpegaudio_decode_header'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(mp3dec.o): In function `mp3_parse_vbr_tags':* */home/mypath/ffmpeg/libavformat/mp3dec.c:129: undefined reference to `avpriv_mpegaudio_decode_header'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(mp3dec.o): In function `mp3_read_probe':* */home/mypath/ffmpeg/libavformat/mp3dec.c:68: undefined reference to `avpriv_mpa_decode_header'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(mp3enc.o): In function `mp3_write_audio_packet':* */home/mypath/ffmpeg/libavformat/mp3enc.c:272: undefined reference to `avpriv_mpegaudio_decode_header'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(mp3enc.o): In function `mp3_write_xing':* */home/mypath/ffmpeg/libavformat/mp3enc.c:167: undefined reference to `avpriv_mpegaudio_decode_header'* */home/mypath/ffmpeg/libavformat/mp3enc.c:167: undefined reference to `avpriv_mpegaudio_decode_header'* */home/mypath/ffmpeg/libavformat/mp3enc.c:167: undefined reference to `avpriv_mpegaudio_decode_header'* */home/mypath/ffmpeg/libavformat/mp3enc.c:167: undefined reference to `avpriv_mpegaudio_decode_header'* 
*/home/mypath/ffmpeg_binARM/lib/libavformat.a(mp3enc.o):/home/mypath/ffmpeg/libavformat/mp3enc.c:167: more undefined references to `avpriv_mpegaudio_decode_header' follow* */home/mypath/ffmpeg_binARM/lib/libavformat.a(mp3enc.o): In function `mp3_write_xing':* */home/mypath/ffmpeg/libavformat/mp3enc.c:174: undefined reference to `avpriv_mpa_freq_tab'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(mpegtsenc.o): In function `mpegts_write_packet_internal':* */home/mypath/ffmpeg/libavformat/mpegtsenc.c:1100: undefined reference to `avpriv_mpv_find_start_code'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(mxfenc.o): In function `mxf_parse_dnxhd_frame':* */home/mypath/ffmpeg/libavformat/mxfenc.c:1415: undefined reference to `avpriv_dnxhd_get_frame_size'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(nutenc.o): In function `find_expected_header':* */home/mypath/ffmpeg/libavformat/nutenc.c:64: undefined reference to `avpriv_mpa_freq_tab'* */home/mypath/ffmpeg/libavformat/nutenc.c:64: undefined reference to `avpriv_mpa_bitrate_tab'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(oggenc.o): In function `ogg_write_header':* */home/mypath/ffmpeg/libavformat/oggenc.c:484: undefined reference to `avpriv_split_xiph_headers'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(oggenc.o): In function `ogg_build_flac_headers':* */home/mypath/ffmpeg/libavformat/oggenc.c:314: undefined reference to `avpriv_flac_is_extradata_valid'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(oggparsedirac.o): In function `dirac_header':* */home/mypath/ffmpeg/libavformat/oggparsedirac.c:40: undefined reference to `avpriv_dirac_parse_sequence_header'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(oggparseflac.o): In function `flac_header':* */home/mypath/ffmpeg/libavformat/oggparseflac.c:59: undefined reference to `avpriv_flac_parse_streaminfo'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(oggparsevorbis.o): In function `vorbis_packet':* */home/mypath/ffmpeg/libavformat/oggparsevorbis.c:360: undefined 
reference to `avpriv_vorbis_parse_frame'* */home/mypath/ffmpeg/libavformat/oggparsevorbis.c:326: undefined reference to `avpriv_vorbis_parse_reset'* */home/mypath/ffmpeg/libavformat/oggparsevorbis.c:329: undefined reference to `avpriv_vorbis_parse_frame'* */home/mypath/ffmpeg/libavformat/oggparsevorbis.c:338: undefined reference to `avpriv_vorbis_parse_frame'* */home/mypath/ffmpeg/libavformat/oggparsevorbis.c:355: undefined reference to `avpriv_vorbis_parse_reset'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(oggparsevorbis.o): In function `vorbis_header':* */home/mypath/ffmpeg/libavformat/oggparsevorbis.c:300: undefined reference to `avpriv_vorbis_parse_extradata'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(rtpdec_jpeg.o): In function `jpeg_parse_packet':* */home/mypath/ffmpeg/libavformat/rtpdec_jpeg.c:302: undefined reference to `avpriv_mjpeg_bits_dc_luminance'* */home/mypath/ffmpeg/libavformat/rtpdec_jpeg.c:302: undefined reference to `avpriv_mjpeg_val_dc'* */home/mypath/ffmpeg/libavformat/rtpdec_jpeg.c:302: undefined reference to `avpriv_mjpeg_bits_dc_chrominance'* */home/mypath/ffmpeg/libavformat/rtpdec_jpeg.c:302: undefined reference to `avpriv_mjpeg_bits_ac_luminance'* */home/mypath/ffmpeg/libavformat/rtpdec_jpeg.c:302: undefined reference to `avpriv_mjpeg_val_ac_luminance'* */home/mypath/ffmpeg/libavformat/rtpdec_jpeg.c:302: undefined reference to `avpriv_mjpeg_bits_ac_chrominance'* */home/mypath/ffmpeg/libavformat/rtpdec_jpeg.c:302: undefined reference to `avpriv_mjpeg_val_ac_chrominance'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(rtpenc_mpv.o): In function `ff_rtp_send_mpegvideo':* */home/mypath/ffmpeg/libavformat/rtpenc_mpv.c:59: undefined reference to `avpriv_mpv_find_start_code'* */home/mypath/ffmpeg/libavformat/rtpenc_mpv.c:59: undefined reference to `avpriv_mpv_find_start_code'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(sdp.o): In function `xiph_extradata2config':* */home/mypath/ffmpeg/libavformat/sdp.c:283: undefined reference to 
`avpriv_split_xiph_headers'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(sdp.o): In function `extradata2psets':* */home/mypath/ffmpeg/libavformat/sdp.c:171: undefined reference to `av_bitstream_filter_init'* */home/mypath/ffmpeg/libavformat/sdp.c:187: undefined reference to `av_bitstream_filter_filter'* */home/mypath/ffmpeg/libavformat/sdp.c:188: undefined reference to `av_bitstream_filter_close'* */home/mypath/ffmpeg/libavformat/sdp.c:183: undefined reference to `av_bitstream_filter_close'* */home/mypath/ffmpeg/libavformat/sdp.c:174: undefined reference to `avpriv_mpeg4audio_sample_rates'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(spdifdec.o): In function `spdif_get_offset_and_codec':* */home/mypath/ffmpeg/libavformat/spdifdec.c:60: undefined reference to `avpriv_aac_parse_header'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(spdifenc.o): In function `spdif_header_dts4':* */home/mypath/ffmpeg/libavformat/spdifenc.c:174: undefined reference to `avpriv_dca_sample_rates'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(spdifenc.o): In function `spdif_header_aac':* */home/mypath/ffmpeg/libavformat/spdifenc.c:354: undefined reference to `avpriv_aac_parse_header'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(takdec.o): In function `tak_read_header':* */home/mypath/ffmpeg/libavformat/takdec.c:119: undefined reference to `avpriv_tak_parse_streaminfo'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(ac3dec.o): In function `ac3_eac3_probe':* */home/mypath/ffmpeg/libavformat/ac3dec.c:58: undefined reference to `avpriv_ac3_parse_header'* */home/mypath/ffmpeg/libavformat/ac3dec.c:58: undefined reference to `avpriv_ac3_parse_header'* */home/mypath/ffmpeg_binARM/lib/libavformat.a(adtsenc.o): In function `adts_decode_extradata':* */home/mypath/ffmpeg/libavformat/adtsenc.c:50: undefined reference to `avpriv_mpeg4audio_get_config'* */home/mypath/ffmpeg/libavformat/adtsenc.c:82: undefined reference to `avpriv_copy_pce_data'* 
*/home/mypath/ffmpeg_binARM/lib/libavformat.a(adxdec.o): In function `adx_read_header':* */home/mypath/ffmpeg/libavformat/adxdec.c:90: undefined reference to `avpriv_adx_decode_header'* *collect2: ld returned 1 exit status* *make[2]: *** [lib/libopencv_highgui.so.2.4.9] Error 1* *make[1]: *** [modules/highgui/CMakeFiles/opencv_highgui.dir/all] Error 2* *make: *** [all] Error 2* -- Dr.-Ing. Gorka Vélez Isasmendi Investigador Colaborador / Contributing Researcher Sistemas de transporte inteligentes e Ingeniería / Intelligent Transport Systems and Engineering Donostia - San Sebastián - Spain Tel: +[34] 943 30 92 30 gvelez at vicomtech.org Aviso Legal - Política de privacidad / Lege Oharra - Pribatutasun politika / Legal Notice - Privacy policy -------------- next part -------------- An HTML attachment was scrubbed... URL: From belkevich at mlsdev.com Tue Mar 26 17:00:33 2013 From: belkevich at mlsdev.com (Alexey Belkevich) Date: Tue, 26 Mar 2013 18:00:33 +0200 Subject: [Libav-user] noise on mp3-decoding In-Reply-To: References: Message-ID: Also, I found this note in the avcodec_decode_audio4 description: You might have to align the input buffer. The alignment requirements depend on the CPU and the decoder (http://ffmpeg.org/doxygen/trunk/group__lavc__decoding.html#ga834bb1b062fbcc2de4cf7fb93f154a3e) Does anyone know what this means? And how can I align the buffer? > Hello! > I'm building an iOS audio player. I've tested playback in the iOS Simulator (i386) and everything was fine. But when I tested it on an iPhone (ARM) there was only noise. Other formats play fine. > I've done a little research, and here's what I found: > 1) av_read_frame - returns the same data on both device and simulator (I've checked it in packet->data) > 2) avcodec_decode_audio4 - returns DIFFERENT data on device and simulator (I've checked it in frame->data) > > Can it be related to ffmpeg compiling options? > Any ideas? 
> > -- > Alexey Belkevich > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brado at bighillsoftware.com Tue Mar 26 18:17:59 2013 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Tue, 26 Mar 2013 10:17:59 -0700 Subject: [Libav-user] Sample format questions Message-ID: I have four questions: 1. Is there a function out there to return the proper sample format based upon whether a sample is linear / planar, signed/unsigned, and byte size? In samplefmt.h, it seems there are functions to retrieve the sample format by name, or switch between linear and planar formats, but no functions to determine which sample format should be used based upon sample buffer attributes. Does such a function exist elsewhere? 2. To confirm, are all linear sample formats with more than one channel interleaved by default? In other words, must all linear sample format data be interleaved as such (pseudocode): ch1[0] -> ch2[0] -> ch1[1] -> ch2[1] -> ... -> ch1[n] -> ch2[n] 3. Is endianness in every sample format native endian, meaning, for example, that if I have a float in a captured sample buffer which is native endian it will require no conversion to a sample format endianness? 4. Is there a function that will output the endianness for a sample format at runtime? I read what the documentation says, but having been at a brick wall with processing captured audio, I need verification that the sample format I'm converting into is indeed apples:apples. Thanks, Brad From onemda at gmail.com Tue Mar 26 18:29:22 2013 From: onemda at gmail.com (Paul B Mahol) Date: Tue, 26 Mar 2013 17:29:22 +0000 Subject: [Libav-user] Sample format questions In-Reply-To: References: Message-ID: On 3/26/13, Brad O'Hearne wrote: > I have four questions: > > 1. Is there a function out there to return the proper sample format based > upon whether a sample is linear / planar, signed/unsigned, and byte size? 
In > samplefmt.h, it seems there are functions to retrieve the sample format by > name, or switch between linear and planar formats, but no functions to > determine which sample format should be used based upon sample buffer > attributes. Does such a function exist elsewhere? The sample format is set by the decoder to whatever format the decoder outputs; the same applies to encoder input. > > 2. To confirm, are all linear sample formats with more than one channel > interleaved by default? In other words, must all linear sample format data > be interleaved as such (pseudocode): > > ch1[0] -> ch2[0] -> ch1[1] -> ch2[1] -> ... -> ch1[n] -> ch2[n] Any planar sample format with more than 1 channel is not interleaved. > > 3. Is endianness in every sample format native endian, meaning, for example, > that if I have a float in a captured sample buffer which is native endian it > will require no conversion to a sample format endianness? Yes, it is required that data be in native endian order. > > 4. Is there a function that will output the endianness for a sample format > at runtime? I read what the documentation says, but having been at a brick > wall with processing captured audio, I need verification that the sample > format I'm converting into is indeed apples:apples. As already mentioned, data is always in native endian; it is the decoder's responsibility. > > Thanks, > > Brad > > > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user > From brado at bighillsoftware.com Tue Mar 26 19:02:40 2013 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Tue, 26 Mar 2013 11:02:40 -0700 Subject: [Libav-user] Sample format questions In-Reply-To: References: Message-ID: On Mar 26, 2013, at 10:29 AM, Paul B Mahol wrote: Thx for the reply! > The sample format is set by the decoder to whatever format the decoder > outputs; the same applies to encoder input. 
Yes, I'm actually encoding, but point taken. Sample format is set by the encoder, true. However, in my scenario I have an intermediate step prior to getting to the encoder. I am receiving a sample buffer from QTKit capture which is in a different sample format than the encoder is looking for. In my case, the audio encoder is adpcm_swf (I am encoding video and audio to an FLV video file). adpcm_swf is looking for a sample format of AV_SAMPLE_FMT_S16, which is not the format being received from capture. The format I am receiving from capture is Linear PCM, 32 bit little-endian floating point, 2 channels, 44100 Hz. So I have to resample the decompressed audio sample received from capture so that it is in a format of AV_SAMPLE_FMT_S16 before passing it to the encoder. Coming in from QTKit, it obviously knows nothing about Libav constants. I have all of the necessary attributes on the sample buffer, which I want to use to have Libav deliver the proper sample format without me hardcoding it (which it is now, I'm using AV_SAMPLE_FMT_FLT). However, for some reason, the audio when encoded is garbage, so I'm trying to verify the entire processing pipeline, eliminating assumptions. One assumption I've made is the sample format, another is the endianness -- I'm interested in functions to return each. Does such a function exist for returning what sample format should be used, given linear/planar, bytes/endianness, channels, etc.? > Any planar sample format with more than 1 channel is not interleaved. Does it follow then that any linear format with more than one channel IS interleaved? > As already mentioned, data is always in native endian; it is the decoder's > responsibility. Is there a function that returns what endianness the sample format is presently using? (I'm not looking for a function or doc that specifies something along the lines of "native" here -- I'm looking for the specific endianness so that I can verify that it is indeed native.) 
Thanks, Brad From cmst at live.com Tue Mar 26 09:20:24 2013 From: cmst at live.com (Dolevo Jay) Date: Tue, 26 Mar 2013 08:20:24 +0000 Subject: [Libav-user] --disable-asm --disable-yasm options In-Reply-To: References: , Message-ID: > To: libav-user at ffmpeg.org > From: cehoyos at ag.or.at > Date: Mon, 25 Mar 2013 21:36:50 +0000 > Subject: Re: [Libav-user] --disable-asm --disable-yasm options > > Dolevo Jay writes: > > > Do I lose a lot of decoding performance because > > of --disable-yasm option? > > If you are compiling for x86 (either 32 or 64bit), you > should definitely compile with yasm for performance > reasons. Many optimisations have been ported from > inline assembly to yasm over the last years. > (yasm cannot be used when compiling for other hardware.) > > --disable-asm is relevant for all (most) platforms, you > should never use it except for debugging (as with > --disable-yasm), if you have any problems when not using > it, please report them either here or on trac. > > Please understand the project is called "FFmpeg", find > the current version at http://ffmpeg.org/download.html > > Carl Eugen > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user Hello, Thank you very much for your inputs. When I don't use --disable-yasm option, I got tons of errors just because the yasm version that we are using in openembedded is quite old and ffmpeg requires a newer version of yasm. Updating yasm is fairly easy but then I got multiple of tons of different errors due to compatibility issues between different packages that are used in openembedded. I guess there is no option except updating the whole openembedded which is way beyond my knowledge. Thanks a lot. Regards, -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From onemda at gmail.com Tue Mar 26 19:25:19 2013 From: onemda at gmail.com (Paul B Mahol) Date: Tue, 26 Mar 2013 18:25:19 +0000 Subject: [Libav-user] Sample format questions In-Reply-To: References: Message-ID: On 3/26/13, Brad O'Hearne wrote: > On Mar 26, 2013, at 10:29 AM, Paul B Mahol wrote: > > Thx for the reply! > >> sample format is set by decoder to what sample format decoder outputs its >> data, >> similar apply for encoder input. > > Yes, I'm actually encoding, but point taken. Sample format is by the > encoder, true. However, in my scenario I have an intermediate step prior to > getting to the encoder. I am receiving a sample buffer from QTKit capture > which is in a different sample format than the encoder is looking for. In my > case, the audio encoder is adpm_swf (I am encoding video and audio to an FLV > video file). adpm_swf is looking for a sample format of AV_SAMPLE_FMT_S16, > which is not the format being received from capture. The format I am > receiving from capture is > > Linear PCM, 32 bit little-endian floating point, 2 channels, 44100 Unless you are on big endian, FLT sample format should be fine. Otherwise you need to swap each 4 bytes. > > So I have to resample the decompressed audio sample received from capture so > that it is in a format of AV_SAMPLE_FMT_S16 before passing it to the > encoder. Coming in from QTKit, it obviously knows nothing about Libav > constants. I have all of the necessary attributes on the sample buffer, > which I want to use to have Libav deliver the proper sample format without > me hardcoding it (which it is now, I'm using AV_SAMPLE_FMT_FLT). However, > for some reason, the audio when encoded is garbage, so I'm trying to verify > the entire processing pipeline, eliminating assumptions. One assumption I've > made is the sample format, another is the endianness -- I'm interested in > functions to return each. 
Does such a function exist for returning what > sample format should be used, given linear/planar, bytes/endianness, > channels, etc.? > >> any planar sample format with more than 1 channel is not interleaved. > > Does it follow then that any linear format with more than one channel IS > interleaved? Probably, I never worked with QTKit capture so I dunno. It could return raw data or raw data with some headers. > >> As already mentioned data is always in native endian, its decoder >> responsibility.... > > Is there a function that returns what endianness the sample format is > presently using (and I'm not looking for a function or doc that specifies > something along the lines of "native" here -- I'm looking for the specific > endianness so that I can verify that it is indeed native. It is always native, eg. decoder gives native, encoder expects native input. > > Thanks, > > Brad > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user > From brado at bighillsoftware.com Tue Mar 26 20:14:20 2013 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Tue, 26 Mar 2013 12:14:20 -0700 Subject: [Libav-user] libswresample, packed samples and alignment Message-ID: <5B7F649F-356B-43D2-9C26-83E9C3CE3E63@bighillsoftware.com> Hello, I've noticed that several functions in samplefmt.h take an "align" parameter, such as the function calls: int av_samples_alloc(uint8_t **audio_data, int *linesize, int nb_channels, int nb_samples, enum AVSampleFormat sample_fmt, int align); int av_samples_get_buffer_size(int *linesize, int nb_channels, int nb_samples, enum AVSampleFormat sample_fmt, int align); int av_samples_fill_arrays(uint8_t **audio_data, int *linesize, const uint8_t *buf, int nb_channels, int nb_samples, enum AVSampleFormat sample_fmt, int align); This align parameter has the following description: * @param align buffer size alignment (0 = default, 1 = no alignment) I want to make 
sure that I'm properly understanding the purpose and setting of this parameter. As I understand it, a sample is "packed" if its sample bits occupy the entire available bits for the channel. If a sample's bits do not occupy the entire available bits for the channel it is not packed, and then the data is either high or low-aligned within the channel. In the case of my app, my sample format of captured audio is: Linear PCM, 32 bit little-endian floating point, 2 channels, 44100 Hz and this data IS indeed packed, meaning that there is neither high nor low alignment. In setting the appropriate align value for the aforementioned functions, I have two questions: 1. What is "default" alignment according to the documentation? Is that high or low, or something else? 2. Based on my captured sample data being packed, shouldn't this mean that there is NO alignment, and therefore the value for these method invocations be 1? Thanks, Brad From coderroadie at gmail.com Tue Mar 26 21:07:20 2013 From: coderroadie at gmail.com (Richard Schilling) Date: Tue, 26 Mar 2013 13:07:20 -0700 Subject: [Libav-user] creating a new filter... Message-ID: <321B8C36-D2F3-46E4-ACE5-F2D8D1E5ABC6@gmail.com> Greetings. This is my first post. I looked in the listserv archives but didn't find anyone talking about this, so here it goes. I need to implement a new audio (audio only) filter. I see the example code in filtering_audio.c that uses a buffer sink. But, I'm having a hard time finding particulars on what everything means. So, I'm hoping someone on this list can help. My goal: I have an application that uses the FFMPG library. I need to create a new custom audio filter, say my_filter.c. filtering_audio.c looks like the place to start. Is that correct? In filtering_audio.c: * Can I get a basic walk-through of the code in the function init_filters? it looks like there is a source (AVFilter *abuffersrc) and sink buffer (AVFilter *abuffersing). This looks like two filters. 
How do I install just one filter? I'm looking for a basic checklist here so I know that my calls to avfilter_graph_create_filter, av_filtergraph_parse, etc ? are correct ? .basically a walk-through of the example code. * In avfilter_asink_abuffer (buffersink.c) I see .inputs and .outputs defined. .inputs defines an AVFilterPad called "default". .outputs defines no (NULL) filter pads. How does this relate to this code in filtering_audio.c? /* Endpoints for the filter graph. */ outputs->name = av_strdup("in"); outputs->filter_ctx = buffersrc_ctx; outputs->pad_idx = 0; outputs->next = NULL; inputs->name = av_strdup("out"); inputs->filter_ctx = buffersink_ctx; inputs->pad_idx = 0; inputs->next = NULL; Thank you in advance. Cheers, Richard Schilling -------------- next part -------------- An HTML attachment was scrubbed... URL: From brado at bighillsoftware.com Wed Mar 27 04:50:08 2013 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Tue, 26 Mar 2013 20:50:08 -0700 Subject: [Libav-user] QTKit -> Libav: has it ever been done? Message-ID: <37AD1A99-1816-4546-BCBE-B363FB840377@bighillsoftware.com> In lieu of having no luck over several weeks getting video + audio samples captured from QTKit resampled and encoded with Libav to FLV (with video), I've kind of hit a bit of a brick wall. The runnable Mac app and source demonstrating this use case hasn't apparently shed any light on why the audio being encoded is junk. While my gut all along has been that there's something relatively simple at work, perhaps a pointer problem or something, at the same time I've been over this fairly modest section of code so many times I'm not sure what more to try. Again, my use case: QTKit capture (audio / video) -> convert (video)/resample (audio) -> encode to FLV -> output (file or network stream) Up to this point, I've assumed all of the problem was in my code. 
I still don't doubt that is likely the case, but given no headway looking for the problem there, or to whatever degree anyone else has taken a look at the sample app I provided it also hasn't rendered any real headway, I started looking in other media libraries which either depend or appear to partially depend on Libav. Basically what appears to be the case is that they don't use Libav on the other side of QTKit capture, so as of now, I am not aware of any example which is publicly available demonstrating that this has been done. So at this point, it seems a decent question to consider if maybe the problem lies in Libav. I think there's a pretty simple question to ask: has a QTSampleBuffer ever been used to fill a sample array, resampled, and encoded to FLV? -- i.e. does anyone actually know if this works? If so, I would be much obliged if you could direct me to the unit test or code example which demonstrates this, and I should be able to figure out what deficiency exists in my code. If such a unit test or code example does *not* exist, then I guess a question for the maintainers -- what do you think the chances are that there's a bug somewhere in either resampling or the adpcm_swf codec that could be affecting audio? Thanks, I greatly appreciate it. Brad From brado at bighillsoftware.com Wed Mar 27 04:58:54 2013 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Tue, 26 Mar 2013 20:58:54 -0700 Subject: [Libav-user] Libav, licensing, h.264 Message-ID: I have a need for processing of h.264 video in a commercial (for sale) product. I am aware that there are more or less two flavors of FFmpeg, one with GPL'd code compiled in, and another with GPL'd code disabled from compilation. In general, GPL governs the former, while LGPL governs the latter. I'm not aware of the exact functional boundaries of each, but I am aware that libx264 is apparently GPL'd, so that is a no-op for any commercial product. 
My question is this: is there *any* h.264 processing capability within Libav that does not fall under GPL, that is usable in a commercial product, and if so what are its limitations? Can Libav be used to handle h.264 processing in a commercial product (obviously without libx264), or no? Thanks for the clarification. Cheers, Brad From ubitux at gmail.com Wed Mar 27 05:04:00 2013 From: ubitux at gmail.com (=?utf-8?B?Q2zDqW1lbnQgQsWTc2No?=) Date: Wed, 27 Mar 2013 05:04:00 +0100 Subject: [Libav-user] Libav, licensing, h.264 In-Reply-To: References: Message-ID: <20130327040400.GA3758@leki> On Tue, Mar 26, 2013 at 08:58:54PM -0700, Brad O'Hearne wrote: > I have a need for processing of h.264 video in a commercial (for sale) product. I am aware that there are more or less two flavors of FFmpeg, one with GPL'd code compiled in, and another with GPL'd code disabled from compilation. In general, GPL governs the former, while LGPL governs the latter. I'm not aware of the exact functional boundaries of each, but I am aware that libx264 is apparently GPL'd, so that is a no-op for any commercial product. > Yes, a detail of what is under GPL is available here: http://git.videolan.org/?p=ffmpeg.git;a=blob;f=LICENSE;hb=HEAD > My question is this: is there *any* h.264 processing capability within Libav that does not fall under GPL, that is usable in a commercial product, and if so what are its limitations? Can Libav be used to handle h.264 processing in a commercial product (obviously without libx264), or no? > The h264 decoder is under LGPL. [...] Note: the project is FFmpeg, not Libav. -- Cl?ment B. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 490 bytes Desc: not available URL: From brado at bighillsoftware.com Wed Mar 27 05:19:57 2013 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Tue, 26 Mar 2013 21:19:57 -0700 Subject: [Libav-user] Libav, licensing, h.264 In-Reply-To: <20130327040400.GA3758@leki> References: <20130327040400.GA3758@leki> Message-ID: <7FC2D231-486D-4896-A77F-669D9B1CDF74@bighillsoftware.com> On Mar 26, 2013, at 9:04 PM, Cl?ment B?sch wrote: Thanks for the reply, Clement! > The h264 decoder is under LGPL. I need not only the decoder, but the encoder as well. I read the link sent...I wasn't clear on it -- is the encoder under GPL? If so, do you have any idea what other commercial outfits are using the h.264 encoding? Surely they aren't all rolling their own h.264 codec... > Note: the project is FFmpeg, not Libav. Apologies, between the project history someone posted a week ago, documentation, library naming conventions, the mailing list names, and that Googling info is nearly always most useful searching on "Libav" rather than "FFmpeg", (not to mention having looked at code for WAY too long in the past several days), I mixed them up. My heart was in the right place though...sorry about that. :-) Brad From ubitux at gmail.com Wed Mar 27 05:25:27 2013 From: ubitux at gmail.com (=?utf-8?B?Q2zDqW1lbnQgQsWTc2No?=) Date: Wed, 27 Mar 2013 05:25:27 +0100 Subject: [Libav-user] Libav, licensing, h.264 In-Reply-To: <7FC2D231-486D-4896-A77F-669D9B1CDF74@bighillsoftware.com> References: <20130327040400.GA3758@leki> <7FC2D231-486D-4896-A77F-669D9B1CDF74@bighillsoftware.com> Message-ID: <20130327042527.GB3758@leki> On Tue, Mar 26, 2013 at 09:19:57PM -0700, Brad O'Hearne wrote: > On Mar 26, 2013, at 9:04 PM, Cl?ment B?sch wrote: > > Thanks for the reply, Clement! > > > The h264 decoder is under LGPL. > > I need not only the decoder, but the encoder as well. I read the link sent...I wasn't clear on it -- is the encoder under GPL? 
If so, do you have any idea what other commercial outfits are using the h.264 encoding? Surely they aren't all rolling their own h.264 codec... > There is no native h264 encoder in FFmpeg. A long time ago there was an experimental one but it was dropped because libx264 was maintained and by far superior. The x264 project offers some special licencing, you might want to ask them directly. > > Note: the project is FFmpeg, not Libav. > > Apologies, between the project history someone posted a week ago, documentation, library naming conventions, the mailing list names, and that Googling info is nearly always most useful searching on "Libav" rather than "FFmpeg", (not to mention having looked at code for WAY too long in the past several days), I mixed them up. My heart was in the right place though...sorry about that. > The fork took a confusing name on purpose. Basically, anything @ffmpeg.org is FFmpeg. -- Cl?ment B. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 490 bytes Desc: not available URL: From nkipe at tatapowersed.com Wed Mar 27 09:12:05 2013 From: nkipe at tatapowersed.com (Navin) Date: Wed, 27 Mar 2013 13:42:05 +0530 Subject: [Libav-user] Libav, licensing, h.264 In-Reply-To: <20130327042527.GB3758@leki> References: <20130327040400.GA3758@leki> <7FC2D231-486D-4896-A77F-669D9B1CDF74@bighillsoftware.com> <20130327042527.GB3758@leki> Message-ID: <5152A9D5.5050704@tatapowersed.com> As I understand, there shouldn't be any licensing issues if one uses ffmpeg via the ffmpeg DLL's. As provided by Zeranoe in the ffmpeg-win32-shared builds http://ffmpeg.zeranoe.com/builds/ Nav On 3/27/2013 9:55 AM, Cl?ment Boesch wrote: > On Tue, Mar 26, 2013 at 09:19:57PM -0700, Brad O'Hearne wrote: >> On Mar 26, 2013, at 9:04 PM, Cl?ment Boesch wrote: >> >> Thanks for the reply, Clement! >> >>> The h264 decoder is under LGPL. >> I need not only the decoder, but the encoder as well. 
I read the link sent...I wasn't clear on it -- is the encoder under GPL? If so, do you have any idea what other commercial outfits are using the h.264 encoding? Surely they aren't all rolling their own h.264 codec... >> > There is no native h264 encoder in FFmpeg. A long time ago there was an > experimental one but it was dropped because libx264 was maintained and by > far superior. > > The x264 project offers some special licencing, you might want to ask them > directly. > >>> Note: the project is FFmpeg, not Libav. >> Apologies, between the project history someone posted a week ago, documentation, library naming conventions, the mailing list names, and that Googling info is nearly always most useful searching on "Libav" rather than "FFmpeg", (not to mention having looked at code for WAY too long in the past several days), I mixed them up. My heart was in the right place though...sorry about that. >> > The fork took a confusing name on purpose. Basically, anything @ffmpeg.org > is FFmpeg. > > > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From cehoyos at ag.or.at Wed Mar 27 09:48:43 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Wed, 27 Mar 2013 08:48:43 +0000 (UTC) Subject: [Libav-user] Libav, licensing, h.264 References: <20130327040400.GA3758@leki> <7FC2D231-486D-4896-A77F-669D9B1CDF74@bighillsoftware.com> <20130327042527.GB3758@leki> <5152A9D5.5050704@tatapowersed.com> Message-ID: Navin writes: > As I understand, there shouldn't be any licensing issues if one uses > ffmpeg via the ffmpeg DLL's. As provided by Zeranoe in the > ffmpeg-win32-shared builds http://ffmpeg.zeranoe.com/builds/Nav Why should that be? 
Ie, if a license issue applies to the FFmpeg project (in this case a license issue that applies to one of the libraries Brad wants to use, namely libx264 which is - under normal circumstances - available under the terms of the GPL, which makes FFmpeg linked to libx264 automatically GPL) not apply to the Zeranoe builds that are made from FFmpeg sources? Please do not top-post here. Carl Eugen From cehoyos at ag.or.at Wed Mar 27 09:58:31 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Wed, 27 Mar 2013 08:58:31 +0000 (UTC) Subject: [Libav-user] QTKit -> Libav: has it ever been done? References: <37AD1A99-1816-4546-BCBE-B363FB840377@bighillsoftware.com> Message-ID: Brad O'Hearne writes: > In lieu of having no luck over several weeks getting video > + audio samples captured from QTKit resampled and encoded > with Libav to FLV (with video), I've kind of hit a bit of > a brick wall. The runnable Mac app and source > demonstrating this use case hasn't apparently shed any > light on why the audio being encoded is junk. The reason is probably that most developers do not own OSX hardware and therefore cannot test your code. Did you already try to produce a test-case that does not use QTKit? There is definitely a native decoder that outputs the format that (you believe) QTKit offers, and that code would not be OSX specific in the end. You could start with one of the examples in doc/examples and change it to your needs. Carl Eugen From krueger at lesspain.de Wed Mar 27 10:09:37 2013 From: krueger at lesspain.de (=?UTF-8?Q?Robert_Kr=C3=BCger?=) Date: Wed, 27 Mar 2013 10:09:37 +0100 Subject: [Libav-user] Libav, licensing, h.264 In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 4:58 AM, Brad O'Hearne wrote: > I have a need for processing of h.264 video in a commercial (for sale) product. I am aware that there are more or less two flavors of FFmpeg, one with GPL'd code compiled in, and another with GPL'd code disabled from compilation. 
In general, GPL governs the former, while LGPL governs the latter. I'm not aware of the exact functional boundaries of each, but I am aware that libx264 is apparently GPL'd, so that is a no-op for any commercial product. > > My question is this: is there *any* h.264 processing capability within Libav that does not fall under GPL, that is usable in a commercial product, and if so what are its limitations? Can Libav be used to handle h.264 processing in a commercial product (obviously without libx264), or no? >From the x264 website: In addition to being free to use under the GNU GPL, x264 is also available under a commercial license from x264 LLC and CoreCodec. Contact info at x264licensing.com for more details. If you need h264 (software) encoding in ffmpeg libs in a commercial product and you don't want to distribute the source of your application, this is probably your only legal choice (IANAL). From rjvbertin at gmail.com Wed Mar 27 10:18:39 2013 From: rjvbertin at gmail.com (=?ISO-8859-1?Q?Ren=E9_J=2EV=2E_Bertin?=) Date: Wed, 27 Mar 2013 10:18:39 +0100 Subject: [Libav-user] QTKit -> Libav: has it ever been done? In-Reply-To: References: <37AD1A99-1816-4546-BCBE-B363FB840377@bighillsoftware.com> Message-ID: On 27 March 2013 09:58, Carl Eugen Hoyos wrote: > not use QTKit? There is definitely a native decoder > that outputs the format that (you believe) QTKit > offers, and that code would not be OSX specific in > the end. Are you saying there's a decoder that outputs the decoded content in QTSampleBuffer format, tested to be accepted as input by QTKit? Even if that's the case (and I'm not doubting your word on it), that still doesn't guarantee that those buffers are filled with the same kind of data. I recall that Brad's problem is with audio, and the symptoms suggest that there is some kind of misinterpretation of the soundbytes. It could be as simple as a disagreement on endianness, or signed vs. unsigned. 
His words ('ear piercing screams') also evoke what happens when you put a mic too close to the speaker it feeds into, but I fail to see how one would achieve that kind of ringing by accident in software :) Would it also be an idea to dump the captured content to a supported container file, ideally without any additional processing of course, use that as the input, and try to analyse things from there? R. From cehoyos at ag.or.at Wed Mar 27 10:26:00 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Wed, 27 Mar 2013 09:26:00 +0000 (UTC) Subject: [Libav-user] QTKit -> Libav: has it ever been done? References: <37AD1A99-1816-4546-BCBE-B363FB840377@bighillsoftware.com> Message-ID: Ren? J.V. Bertin writes: > > On 27 March 2013 09:58, Carl Eugen Hoyos wrote: > > > not use QTKit? There is definitely a native decoder > > that outputs the format that (you believe) QTKit > > offers, and that code would not be OSX specific in > > the end. > > Are you saying there's a decoder that outputs the > decoded content in QTSampleBuffer format, tested to be > accepted as input by QTKit? No, I am not saying that. (I don't know.) I am saying that for every (input) format that the resampler (both the library and the filter) accepts, a (at least one) native decoder exists that produces this format (and can therefore be used to test the code that you use to resample). It is of course possible that QTKit uses a completely different format, but one way to find out is to test your resampling wrapper code with a decoder for which you know the actual format. Note that an endianess issue is very unlikely because FFmpeg only supports native endian audio formats (as opposed to codecs), a signed / unsigned problem is of course possible but this should be relatively easy to verify. 
Carl Eugen From rjvbertin at gmail.com Wed Mar 27 12:44:24 2013 From: rjvbertin at gmail.com (=?ISO-8859-1?Q?Ren=E9_J=2EV=2E_Bertin?=) Date: Wed, 27 Mar 2013 12:44:24 +0100 Subject: [Libav-user] QTKit -> Libav: has it ever been done? In-Reply-To: References: <37AD1A99-1816-4546-BCBE-B363FB840377@bighillsoftware.com> Message-ID: On 27 March 2013 10:26, Carl Eugen Hoyos wrote: > No, I am not saying that. > (I don't know.) Heh, I'd be surprised if it does exist. If indeed very few FFmpeg devs have OS X hardware, what would be the reason of existence for such a decoder? :) > It is of course possible that QTKit uses a completely > different format, but one way to find out is to test > your resampling wrapper code with a decoder for which > you know the actual format. Yes. And in parallel, one could dump the captured output, not in some container format as I suggested before, but the raw QTSampleBuffers. It shouldn't be too hard to extract the necessary API (structure definition(s), stub functions, etc) so that one can have platform-independent code for locating the data of interest in those imported QTSampleBuffers and feed it into libav. Provided of course that the encoding part of Brad's code doesn't make use of OS X specific programming language features like ARC. (Which is keeping me from playing with his code because it won't build on my older OS version ...) > Note that an endianess issue is very unlikely because FFmpeg > only supports native endian audio formats (as opposed to So if the QTSampleBuffers contain non-native endian data the FFmpeg-encoded output will inevitably be the "wrong way around" unless it is converted before being encoded. Not? > codecs), a signed / unsigned problem is of course possible > but this should be relatively easy to verify. There must be something relatively simple that underlies Brad's problem. After all, video encoding works fine, so his general approach cannot be completely wrong ... 
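[Editorial note: if the captured buffers ever did turn out to hold opposite-endian 16-bit samples, the byte swap René describes is a one-liner per sample. A minimal sketch of such a conversion (hypothetical helper, not from Brad's code); as Carl Eugen notes above, FFmpeg's audio sample formats are native-endian, so this is unlikely to be the actual culprit:]

```c
#include <stdint.h>
#include <stddef.h>

/* Swap the byte order of every 16-bit sample in place. */
static void swap16_buffer(int16_t *samples, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        uint16_t u = (uint16_t)samples[i];
        samples[i] = (int16_t)((u >> 8) | (u << 8));
    }
}
```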
From onemda at gmail.com Wed Mar 27 13:19:40 2013 From: onemda at gmail.com (Paul B Mahol) Date: Wed, 27 Mar 2013 12:19:40 +0000 Subject: [Libav-user] QTKit -> Libav: has it ever been done? In-Reply-To: <37AD1A99-1816-4546-BCBE-B363FB840377@bighillsoftware.com> References: <37AD1A99-1816-4546-BCBE-B363FB840377@bighillsoftware.com> Message-ID: On 3/27/13, Brad O'Hearne wrote: > In lieu of having no luck over several weeks getting video + audio samples > captured from QTKit resampled and encoded with Libav to FLV (with video), > I've kind of hit a bit of a brick wall. The runnable Mac app and source > demonstrating this use case hasn't apparently shed any light on why the > audio being encoded is junk. While my gut all along has been that there's > something relatively simple at work, perhaps a pointer problem or something, > at the same time I've been over this fairly modest section of code so many > times I'm not sure what more to try. > > Again, my use case: > > QTKit capture (audio / video) -> convert (video)/resample (audio) -> encode > to FLV -> output (file or network stream) > > Up to this point, I've assumed all of the problem was in my code. I still > don't doubt that is likely the case, but given no headway looking for the > problem there, or to whatever degree anyone else has taken a look at the > sample app I provided it also hasn't rendered any real headway, I started > looking in other media libraries which either depend or appear to partially > depend on Libav. Basically what appears to be the case is that they don't > use Libav on the other side of QTKit capture, so as of now, I am not aware > of any example which is publicly available demonstrating that this has been > done. > > So at this point, it seems a decent question to consider if maybe the > problem lies in Libav. I think there's a pretty simple question to ask: has > a QTSampleBuffer ever been used to fill a sample array, resampled, and > encoded to FLV? -- i.e. 
does anyone actually know if this works? If so, I > would be much obliged if you could direct me to the unit test or code > example which demonstrates this, and I should be able to figure out what > deficiency exists in my code. If such a unit test or code example does *not* > exist, then I guess a question for the maintainers -- what do you think the > chances are that there's a bug somewhere in either resampling or the > adpcm_swf codec that could be affecting audio? > > Thanks, I greatly appreciate it. Provide raw output given by QTKit whatever, and I'm sure someone will give you solution for your problem. > > Brad > > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user > From onemda at gmail.com Wed Mar 27 13:22:04 2013 From: onemda at gmail.com (Paul B Mahol) Date: Wed, 27 Mar 2013 12:22:04 +0000 Subject: [Libav-user] creating a new filter... In-Reply-To: <321B8C36-D2F3-46E4-ACE5-F2D8D1E5ABC6@gmail.com> References: <321B8C36-D2F3-46E4-ACE5-F2D8D1E5ABC6@gmail.com> Message-ID: On 3/26/13, Richard Schilling wrote: > Greetings. > > This is my first post. I looked in the listserv archives but didn't find > anyone talking about this, so here it goes. > > I need to implement a new audio (audio only) filter. I see the example code > in filtering_audio.c that uses a buffer sink. But, I'm having a hard time > finding particulars on what everything means. So, I'm hoping someone on > this list can help. > > My goal: I have an application that uses the FFMPG library. I need to > create a new custom audio filter, say my_filter.c. > > filtering_audio.c looks like the place to start. Is that correct? IIRC currently the only way to implement new filters is in libavfilter itself. > > In filtering_audio.c: > > * Can I get a basic walk-through of the code in the function init_filters? > it looks like there is a source (AVFilter *abuffersrc) and sink buffer > (AVFilter *abuffersing). 
This looks like two filters. How do I install > just one filter? I'm looking for a basic checklist here so I know that my > calls to avfilter_graph_create_filter, av_filtergraph_parse, etc ... are > correct ... .basically a walk-through of the example code. > > * In avfilter_asink_abuffer (buffersink.c) I see .inputs and .outputs > defined. .inputs defines an AVFilterPad called "default". .outputs defines > no (NULL) filter pads. How does this relate to this code in > filtering_audio.c? > > /* Endpoints for the filter graph. */ > outputs->name = av_strdup("in"); > outputs->filter_ctx = buffersrc_ctx; > outputs->pad_idx = 0; > outputs->next = NULL; > > inputs->name = av_strdup("out"); > inputs->filter_ctx = buffersink_ctx; > inputs->pad_idx = 0; > inputs->next = NULL; > > > Thank you in advance. > > Cheers, > Richard Schilling From cehoyos at ag.or.at Wed Mar 27 14:53:36 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Wed, 27 Mar 2013 13:53:36 +0000 (UTC) Subject: [Libav-user] --disable-asm --disable-yasm options References: , Message-ID: Dolevo Jay writes: > When I don't use --disable-yasm option, I got tons > of errors just because the yasm version that we are > using in openembedded is quite old and ffmpeg > requires a newer version of yasm. That is unexpected since the configure script tests the yasm version, you should therefore get only one error: Please provide your configure line and the first error (and the versions you are testing). > Updating yasm is fairly easy but then I got multiple > of tons of different errors due to compatibility > issues between different packages that are used in > openembedded. That also sounds unexpected: yasm is a static executable (that you can put into /usr/local/bin and) that imo simply cannot interfere with any other package. 
Carl Eugen From tksharpless at gmail.com Wed Mar 27 19:20:55 2013 From: tksharpless at gmail.com (Thomas Sharpless) Date: Wed, 27 Mar 2013 14:20:55 -0400 Subject: [Libav-user] setting up x264 Message-ID: I know the interface between libavcodec and libx264 is such that you can pass at least some of the native x264 option strings, such as preset names, through the opts argument to avcodec_open2(). However it also appears that this is not enough to put the codec into a usable state. Is it spelled out anywhere just what has to be set in the libavcodec contexts, and what can be set up with x264 options? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From brado at bighillsoftware.com Wed Mar 27 20:55:43 2013 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Wed, 27 Mar 2013 12:55:43 -0700 Subject: [Libav-user] QTKit -> Libav: has it ever been done? In-Reply-To: References: <37AD1A99-1816-4546-BCBE-B363FB840377@bighillsoftware.com> Message-ID: On Mar 27, 2013, at 4:44 AM, René J.V. Bertin wrote: > So if the QTSampleBuffers contain non-native endian data the > FFmpeg-encoded output will inevitably be the "wrong way around" unless > it is converted before being encoded. Not? Thanks for the replies everyone, as they all raise new ideas. I want to answer a few questions, and then revisit the situation asking a few further questions....perhaps we can get closer. First, regarding endian-ness. QTKit is using native-endian (which in OS X on Intel is little-endian), but more than that, I'm explicitly checking endian-ness on the sample buffer, and it is little endian. This is the reason I was asking if there was a function in FFmpeg which would output the endian-ness explicitly, just so I could verify that. But with what information I have, it appears we are going from little-endian to little-endian, so endian-ness shouldn't be an issue. 
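[Editorial note: the host byte order Brad wants to confirm can also be checked without any FFmpeg API at all; a tiny self-contained sketch follows. Since FFmpeg's AVSampleFormat values are native-endian, the host order is the only one that matters on the encoder side:]

```c
#include <stdint.h>
#include <string.h>

/* Returns 1 on a little-endian host, 0 on a big-endian one. */
static int host_is_little_endian(void)
{
    uint32_t probe = 1;
    uint8_t first;
    memcpy(&first, &probe, 1);  /* inspect the lowest-addressed byte */
    return first == 1;
}
```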
Second, Paul said: > Provide raw output given by QTKit whatever, and I'm sure someone will > give you solution for your problem. I'll take you up on that. You'll have to give me a little bit to create this, but I'm going to provide two files, the first just the raw bytes of a QTKit sample buffer, and the second, a compressed FLV file created by FFmpeg containing a short encoded audio stream so that you can see what I'm working with. I'll post that later today. Finally, I want to revisit the scenario one more time with a few questions...maybe there's something in there that will turn on a light bulb (my own) somewhere. So here goes: Creating an AVOutputFormatContext using av_guess_format, passing it an extension of "flv" and a MIME type of "video/x-flv", configures the context with the "adpcm_swf" audio codec, which requires a sample format of AV_SAMPLE_FMT_S16. The QTSampleBuffer format being captured from QTKit is as follows: Linear PCM, 32 bit little-endian floating point, 2 channels, 44100 Hz This format would appear to map to the FFmpeg sample format of AV_SAMPLE_FMT_FLT. However, there's a difference in how QTKit is delivering the sample buffer data -- it isn't interleaved. In other words, channel 1 samples come before all channel 2 samples. So I then interleave this data (you can see this in the QTFFAVStreamer streamAudioFrame method of my sample app) to put it into AV_SAMPLE_FMT_FLT, prior to attempting any resampling, so that the resampling converts from AV_SAMPLE_FMT_FLT to AV_SAMPLE_FMT_S16. There's a very similar handling example I was referred to a while back by the QuickTime API mailing list which does this, you can see that here: http://git.videolan.org/?p=vlc.git;a=blob;f=modules/access/qtsound.m;h=4ff12309927591b749e40ccca9227fe6ba293711;hb=74a3b3f19f3f15843e913ce347c237eb23375f6f Unfortunately, it doesn't proceed with resampling or encoding with FFmpeg, so that's as far as I can follow the example. 
So if I understand the resampling process, here is what should be happening: decompressed audio samples in AV_SAMPLE_FMT_FLT -> [FFmpeg resample] -> decompressed audio samples in AV_SAMPLE_FMT_S16 -> [FFmpeg encoding] -> FLV file If there's any part of that which is inaccurate, please let me know. However, assuming that is accurate, I'm wondering if the resampling step is the problem, specifically the conversion of floats to signed 16-bits. I could perform the resampling manually, if I knew exactly how that conversion is occurring. This raises a couple of decent questions: 1. Regarding sample formats, what is the difference between AV_SAMPLE_FMT_S32 and AV_SAMPLE_FMT_FLT? Both are signed, both are 32 bits...? 2. How is a 32 bit float being converted to signed 16 bits? Once I know this, I'll write this manually and eliminate that from the equation too. 3. I have posted another message to the mailing list which hasn't been responded to, but I had several questions about packed samples and the align parameter in several libswresample function calls. In reading through the resampling_audio.c example, it wasn't clear to me the setting of this parameter to 0 vs. 1. I'll bump this message again in hopes of directing dialog on that topic there. Thanks again for all the discussion and help. I'll get those files posted later today, but in the meantime, the answers to the above questions would really help. 
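[Editorial note on question 2 above: conceptually, the float-to-S16 conversion is a scale by 2^15 followed by clamping. libswresample's real implementation also deals with rounding and optional dithering, so the following is a sketch of the principle only, not its actual code:]

```c
#include <stdint.h>

/* Map a float sample in [-1.0, 1.0] to a signed 16-bit sample:
 * scale to the S16 range, then clamp any overflow. */
static int16_t float_to_s16(float f)
{
    float scaled = f * 32768.0f;
    if (scaled >  32767.0f) return  32767;  /* clamp positive overflow */
    if (scaled < -32768.0f) return -32768;  /* clamp negative overflow */
    return (int16_t)scaled;
}
```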
Cheers, Brad From brado at bighillsoftware.com Wed Mar 27 21:00:49 2013 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Wed, 27 Mar 2013 13:00:49 -0700 Subject: [Libav-user] libswresample, packed samples and alignment In-Reply-To: <5B7F649F-356B-43D2-9C26-83E9C3CE3E63@bighillsoftware.com> References: <5B7F649F-356B-43D2-9C26-83E9C3CE3E63@bighillsoftware.com> Message-ID: On Mar 26, 2013, at 12:14 PM, Brad O'Hearne wrote: > Hello, > > I've noticed that several functions in samplefmt.h take an "align" parameter, such as the function calls: > > int av_samples_alloc(uint8_t **audio_data, int *linesize, int nb_channels, > int nb_samples, enum AVSampleFormat sample_fmt, int align); > > int av_samples_get_buffer_size(int *linesize, int nb_channels, int nb_samples, > enum AVSampleFormat sample_fmt, int align); > > int av_samples_fill_arrays(uint8_t **audio_data, int *linesize, > const uint8_t *buf, > int nb_channels, int nb_samples, > enum AVSampleFormat sample_fmt, int align); > > This align parameter has the following description: > > * @param align buffer size alignment (0 = default, 1 = no alignment) > > I want to make sure that I'm properly understanding the purpose and setting of this parameter. As I understand it, a sample is "packed" if its sample bits occupy the entire available bits for the channel. If a sample's bits do not occupy the entire available bits for the channel it is not packed, and then the data is either high or low-aligned within the channel. > > In the case of my app, my sample format of captured audio is: > > Linear PCM, 32 bit little-endian floating point, 2 channels, 44100 Hz > > and this data IS indeed packed, meaning that there is neither high nor low alignment. In setting the appropriate align value for the aforementioned functions, I have two questions: > > 1. What is "default" alignment according to the documentation? Is that high or low, or something else? > > 2. 
Based on my captured sample data being packed, shouldn't this mean that there is NO alignment, and therefore the value for these method invocations be 1? This is essentially a "bump" given my reference to this issue in my post in another thread, but I'll add the detail that I revisited the resampling_audio.c example and I noticed that both 0 and 1 align parameter values are being used, and I wasn't completely clear as to why. If someone could speak to the questions I asked above, this would help to clear up the principle of the issue, plus it might have a bearing on the bigger audio problem I'm having. Thanks, Brad From onemda at gmail.com Wed Mar 27 21:08:37 2013 From: onemda at gmail.com (Paul B Mahol) Date: Wed, 27 Mar 2013 20:08:37 +0000 Subject: [Libav-user] QTKit -> Libav: has it ever been done? In-Reply-To: References: <37AD1A99-1816-4546-BCBE-B363FB840377@bighillsoftware.com> Message-ID: On 3/27/13, Brad O'Hearne wrote: > On Mar 27, 2013, at 4:44 AM, Rene J.V. Bertin wrote: >> So if the QTSampleBuffers contain non-native endian data the >> FFmpeg-encoded output will inevitably be the "wrong way around" unless >> it is converted before being encoded. Not? > > Thanks for the replies everyone, as they all raise new ideas. I want to > answer a few questions, and then revisit the situation asking a few further > questions....perhaps we can get closer. > > First, regarding endian-ness. QTKit is using native-endian (which in OS X on > Intel is little-endian), but more than that, I'm explicitly checking > endian-ness on the sample buffer, and it is little endian. This is the > reason I was asking if there was a function in FFmpeg which would output the > endian-ness explicitly, just so I could verify that. But with what > information I have, it appears we are going from little-endian to > little-endian, so endian-ness shouldn't be an issue. 
> > Second, Paul said: > >> Provide raw output given by QTKit whatever, and I'm sure someone will >> give you solution for your problem. > > I'll take you up on that. You'll have to give me a little bit to create > this, but I'm going to provide two files, the first just the raw bytes of a > QTKit sample buffer, and the second, a compressed FLV file created by FFmpeg > containing a short encoded audio stream so that you can see what I'm working > with. I'll post that later today. > > Finally, I want to revisit the scenario one more time with a few > questions...maybe there's something in there that will turn on a light bulb > (my own) somewhere. So here goes: > > When creating an AVOutputFormatContext using av_guess_format passing it an > extension of "flv" and a MIME type of "video/x-flv" configures the context > with the "adpcm_swf" audio codec, which requires a sample format of > AV_SAMPLE_FMT_S16. The QTSampleBuffer format being captured from QTKit is as > follows: > > Linear PCM, 32 bit little-endian floating point, 2 channels, 44100 Hz > > This format would appear to map to the FFmpeg sample format of > AV_SAMPLE_FMT_FLT. However, there's a difference in how QTKit is delivering > the sample buffer data -- it isn't interleaved. In other words, channel 1 > samples come before all channel 2 samples. So I then interleave this data > (you can see this in the QTFFAVStreamer streamAudioFrame method of my sample > app) to put it into AV_SAMPLE_FMT_FLT, prior to attempting any resampling, > so that the resampling converts from AV_SAMPLE_FMT_FLT to AV_SAMPLE_FMT_S16. Then use AV_SAMPLE_FMT_FLTP, you do not need to manually interleave samples. Each channel's samples are put into a separate frame->data[X] where X is the channel number starting from 0. 
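[Editorial note: Paul's planar suggestion, sketched with plain arrays standing in for the captured channel buffers. The helper name is illustrative, not an FFmpeg API; with a planar format such as AV_SAMPLE_FMT_FLTP the resampler reads each channel from its own plane, so no interleaving pass is needed:]

```c
#include <stdint.h>

/* With a planar sample format (e.g. AV_SAMPLE_FMT_FLTP / S16P), each
 * channel's buffer is referenced directly by data[channel]; there is
 * no need to merge the channels into one interleaved buffer first. */
static void point_at_planar_channels(uint8_t *data[2], float *left, float *right)
{
    data[0] = (uint8_t *)left;   /* channel 0 plane */
    data[1] = (uint8_t *)right;  /* channel 1 plane */
}
```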
> There's a very similar handling example I was referred to a while back by > the QuickTime API mailing list which does this, you can see that here: > > http://git.videolan.org/?p=vlc.git;a=blob;f=modules/access/qtsound.m;h=4ff12309927591b749e40ccca9227fe6ba293711;hb=74a3b3f19f3f15843e913ce347c237eb23375f6f > > Unfortunately, it doesn't proceed with resampling or encoding with FFmpeg, > so that's as far as I can follow the example. So if I understand the > resampling process, here is what should be happening: > > decompressed audio samples in AV_SAMPLE_FMT_FLT -> [FFmpeg resample] -> > decompressed audio samples in AV_SAMPLE_FMT_S16 -> [FFmpeg encoding] -> FLV > file > > If there's any part of that which is inaccurate, please let me know. > However, assuming that is accurate, I'm wondering if the resampling step is > the problem, specifically the conversion of floats to signed 16-bits. I > could perform the resampling manually, if I knew exactly how that conversion > is occurring. This raises a couple of decent questions: > > 1. Regarding sample formats, what is the difference between > AV_SAMPLE_FMT_S32 and AV_SAMPLE_FMT_FLT? Both are signed, both are 32 > bits...? > > 2. How is a 32 bit float being converted to signed 16 bits? Once I know > this, I'll write this manually and eliminate that from the equation too. > > 3. I have posted another message to the mailing list which hasn't been > responded to, but I had several questions about packed samples and the align > parameter in several libswresample function calls. In reading through the > resampling_audio.c example, it wasn't clear to me the setting of this > parameter to 0 vs. 1. I'll bump this message again in hopes of directing > dialog on that topic there. > > Thanks again for all the discussion and help. I'll get those files posted > later today, but in the meantime, the answers to the above questions would > really help. 
> > Cheers, > > Brad > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user > From brado at bighillsoftware.com Wed Mar 27 21:24:59 2013 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Wed, 27 Mar 2013 13:24:59 -0700 Subject: [Libav-user] QTKit -> Libav: has it ever been done? In-Reply-To: References: <37AD1A99-1816-4546-BCBE-B363FB840377@bighillsoftware.com> Message-ID: On Mar 27, 2013, at 1:08 PM, Paul B Mahol wrote: > Then use AV_SAMPLE_FMT_FLTP, you do not need to manually interleave samples. > Each channel samples are put into separate frame->data[X] where X is channel > number starting from 0. Hey thanks for the idea, Paul, I'll give it a shot! The resampling will still be converting floats to signed 16-bits, so I am very interested in exactly the conversion that is taking place here. I would think it shouldn't be just casting or truncating, it should be scaling the sample value based on the available storage space. Given 32-bit to 16-bit conversion, and that: Signed 16-bit (I'm assuming integral) = From -32,768 to 32,767, or from -(2^15) to 2^15 - 1 and 32-bit float = (I'm just going to post a Wikipedia link here for brevity, but a float has one bit for sign, 8 bits for exponent, and 23 bits for float data): http://en.wikipedia.org/wiki/Single-precision_floating-point_format So I'm curious -- how is libswresample converting from float to signed 16-bit? Thx, Brad From onemda at gmail.com Wed Mar 27 21:47:29 2013 From: onemda at gmail.com (Paul B Mahol) Date: Wed, 27 Mar 2013 20:47:29 +0000 Subject: [Libav-user] QTKit -> Libav: has it ever been done? In-Reply-To: References: <37AD1A99-1816-4546-BCBE-B363FB840377@bighillsoftware.com> Message-ID: On 3/27/13, Brad O'Hearne wrote: > On Mar 27, 2013, at 1:08 PM, Paul B Mahol wrote: >> Then use AV_SAMPLE_FMT_FLTP, you do not need to manually interleave >> samples. 
>> Each channel samples are put into separate frame->data[X] where X is >> channel >> number starting from 0. > > Hey thanks for the idea, Paul, I'll give it a shot! The resampling will > still be converting floats to signed 16-bits, so I am very interested in > exactly the conversion that is taking place here. I would think it shouldn't > be just casting or truncating, it should be scaling the sample value based > on the available storage space. Given 32-bit to 16-bit conversion, and that: > > Signed 16-bit (I'm assuming integral) = From -32,768 to 32,767, or from > -(2^15) to 2^15 - 1 > > and > > 32-bit float = (I'm just going to post a Wikipedia link here for brevity, > but a float has one bit for sign, 8 bits for exponent, and 23 bits for float > data): http://en.wikipedia.org/wiki/Single-precision_floating-point_format > > So I'm curious -- how is libswresample converting from float to signed > 16-bit? The source is there; why don't you take a look yourself? > > Thx, > > Brad > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user > From alexcohn at netvision.net.il Wed Mar 27 21:53:23 2013 From: alexcohn at netvision.net.il (Alex Cohn) Date: Wed, 27 Mar 2013 22:53:23 +0200 Subject: [Libav-user] Libav, licensing, h.264 In-Reply-To: <7FC2D231-486D-4896-A77F-669D9B1CDF74@bighillsoftware.com> References: <20130327040400.GA3758@leki> <7FC2D231-486D-4896-A77F-669D9B1CDF74@bighillsoftware.com> Message-ID: On Wed, Mar 27, 2013 at 6:19 AM, Brad O'Hearne wrote: > On Mar 26, 2013, at 9:04 PM, Clément Bœsch wrote: > > Thanks for the reply, Clement! > >> The h264 decoder is under LGPL. > > I need not only the decoder, but the encoder as well. I read the link sent...I wasn't clear on it -- is the encoder under GPL? If so, do you have any idea what other commercial outfits are using for h.264 encoding? Surely they aren't all rolling their own h.264 codec... 
There exist some commercial alternatives, but to the best of my knowledge, x264 is simply superior in terms of performance, features, and maintenance. As mentioned before, it is quite possible to obtain an x264 commercial license. This license does not cover the per-use fee to MPEG LA, the body that represents the holders of h264-related patents. Another important note: some platforms may provide a built-in h264 encoder, which may be limited in functionality but accelerated in hardware (a dedicated DSP or the graphics processor). The nice twist there: you probably don't have to pay patent fees if you use such an encoder. Sincerely, Alex Cohn From brado at bighillsoftware.com Wed Mar 27 22:03:39 2013 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Wed, 27 Mar 2013 14:03:39 -0700 Subject: [Libav-user] QTKit -> Libav: has it ever been done? In-Reply-To: References: <37AD1A99-1816-4546-BCBE-B363FB840377@bighillsoftware.com> Message-ID: On Mar 27, 2013, at 1:08 PM, Paul B Mahol wrote: > Then use AV_SAMPLE_FMT_FLTP, you do not need to manually interleave samples. > Each channel samples are put into separate frame->data[X] where X is channel > number starting from 0. SHAZZZAMMM!!!! Paul, you are brilliant. Rather than rewrite the linear sample data as interleaved I took your advice, switched to planar AV_SAMPLE_FMT_S16P, pulled each channel's data out of the AudioBufferList and set those buffer pointers in my uint8_t **sourceData structure, as in: sourceData[0] = audioBufferList->mBuffers[0].mData; sourceData[1] = audioBufferList->mBuffers[1].mData; The audio is perfect (sounding, that is). But for getting past that wall -- I thank *everyone* on the list who replied...dialog is progressive, every comment can raise new avenues to look into. 
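[Editorial note: for contrast, the manual interleaving pass that the planar route makes unnecessary looks roughly like this. An illustrative sketch only; Brad's QTFFAVStreamer code and VLC's qtsound.m differ in detail:]

```c
#include <stddef.h>

/* Merge two non-interleaved channel buffers into one packed
 * (interleaved) buffer, L0 R0 L1 R1 ..., as AV_SAMPLE_FMT_FLT expects. */
static void interleave_stereo(float *dst, const float *left,
                              const float *right, size_t nb_samples)
{
    for (size_t i = 0; i < nb_samples; i++) {
        dst[2 * i]     = left[i];
        dst[2 * i + 1] = right[i];
    }
}
```

This per-frame copy is exactly the work that switching to a planar sample format avoids.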
I am now scratching my head a bit as to what VLC was up to, manually interleaving -- seems like unnecessary work now -- but I suppose they have a common interleaved format QTKit captures need to be converted to for universal downstream libVLC processing. Unfortunately, I cannot pop the champagne corks and blow off the fireworks quite yet. While the audio sounds great, the video timing is not aligned with the audio, and the video now freezes a short way in. While I enjoy the occasional Kung Fu Theater movie (probably dating myself a little there, that's a reference to Saturday afternoon English-dubbed karate movies as a kid), I'm guessing my customers won't find the humor. I am posting a link to the encoded FLV file: https://www.dropbox.com/s/wsol1pd9vv3adrz/Output.flv If any of you experts could lend guidance to how to iron out these timing issues, I would be much obliged. My initial hunch is the setting of the AVPacket dts and pts values and/or use of av_interleaved_write_frame vs av_write_frame. I posted another message a while back about this which was never replied to, but I'm not completely clear as to when to use either. I've also had interesting results using each with only video, only audio, or both video and audio, so unless someone just happens to know off the top of their head what the problem is, I'll create videos for each scenario and pursue that discussion in a different thread, as this one has gotten lengthy, and the topic is somewhat shifting. But to conclude this thread -- a tremendous thank you to everyone who has contributed to the discussion. Cheers, Brad From coderroadie at gmail.com Wed Mar 27 23:30:47 2013 From: coderroadie at gmail.com (Richard Schilling) Date: Wed, 27 Mar 2013 15:30:47 -0700 Subject: [Libav-user] error when calling avfilter_graph_create_filter Message-ID: I think I'm passing all the right parameters to avfilter_graph_create_filter, but I still get a return value of -22, which seems to indicate a parameter problem. 
I'm missing some detail here? Can anyone tell me what's wrong with the call to avfilter_graph_create_filter in the function below? Thanks.

int filtersetup() {
    AVFilterContext *in_filter_ctx;
    AVFilterContext *out_filter_ctx;
    AVFilterGraph *graph;
    AVFilter *input_filter, *output_filter;
    char *sample_fmts, *sample_rates, *channel_layouts;
    AVFilterInOut *outputs, *inputs;
    char args[256];
    int ret;

    outputs = avfilter_inout_alloc();
    inputs = avfilter_inout_alloc();
    graph = avfilter_graph_alloc();
    input_filter = avfilter_get_by_name("abuffer");
    output_filter = avfilter_get_by_name("abuffersink");
    __android_log_print(ANDROID_LOG_INFO, "aphex_dsp", "alloc");
    ret = avfilter_graph_create_filter(&in_filter_ctx, input_filter, "src", args, NULL, graph);
    if (ret < 0) { // ret = -22
        __android_log_print(ANDROID_LOG_ERROR, "aphex_dsp", "unable to create input filter: %d", ret);
        return ret;
    }
}

-------------- next part -------------- An HTML attachment was scrubbed... URL: From coderroadie at gmail.com Wed Mar 27 23:48:18 2013 From: coderroadie at gmail.com (Richard Schilling) Date: Wed, 27 Mar 2013 15:48:18 -0700 Subject: [Libav-user] creating a new filter... In-Reply-To: References: <321B8C36-D2F3-46E4-ACE5-F2D8D1E5ABC6@gmail.com> Message-ID: <78E1B518-E4B1-45E3-81B0-034E1C53D310@gmail.com> Thanks for the heads up. But, I'm on acronym overload. What is IIRC exactly, and what does it mean to FFMPEG. Sorry - I'm relatively new to the FFMPEG library. I'm a bit confused, because it seems that FFMPEG should allow me to write a new filter and add it to the library. And the examples provided in the code seem to suggest that I can use filters programmatically, as opposed to just on the command line. I guess I need a better guide on the ins and outs of doing that. Thanks. Richard On Mar 27, 2013, at 5:22 AM, Paul B Mahol wrote: > On 3/26/13, Richard Schilling wrote: >> Greetings. >> >> This is my first post. 
I looked in the listserv archives but didn't find >> anyone talking about this, so here it goes. >> >> I need to implement a new audio (audio only) filter. I see the example code >> in filtering_audio.c that uses a buffer sink. But, I'm having a hard time >> finding particulars on what everything means. So, I'm hoping someone on >> this list can help. >> >> My goal: I have an application that uses the FFMPG library. I need to >> create a new custom audio filter, say my_filter.c. >> >> filtering_audio.c looks like the place to start. Is that correct? > > IIRC currently the only way to implement new filters is in libavfilter itself. > >> >> In filtering_audio.c: >> >> * Can I get a basic walk-through of the code in the function init_filters? >> it looks like there is a source (AVFilter *abuffersrc) and sink buffer >> (AVFilter *abuffersing). This looks like two filters. How do I install >> just one filter? I'm looking for a basic checklist here so I know that my >> calls to avfilter_graph_create_filter, av_filtergraph_parse, etc ... are >> correct ... .basically a walk-through of the example code. >> >> * In avfilter_asink_abuffer (buffersink.c) I see .inputs and .outputs >> defined. .inputs defines an AVFilterPad called "default". .outputs defines >> no (NULL) filter pads. How does this relate to this code in >> filtering_audio.c? >> >> /* Endpoints for the filter graph. */ >> outputs->name = av_strdup("in"); >> outputs->filter_ctx = buffersrc_ctx; >> outputs->pad_idx = 0; >> outputs->next = NULL; >> >> inputs->name = av_strdup("out"); >> inputs->filter_ctx = buffersink_ctx; >> inputs->pad_idx = 0; >> inputs->next = NULL; >> >> >> Thank you in advance. 
>> >> Cheers, >> Richard Schilling > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user From xuanyu.huang at gmail.com Wed Mar 27 23:57:08 2013 From: xuanyu.huang at gmail.com (=?GB2312?B?u8bQ+dPu?=) Date: Thu, 28 Mar 2013 09:57:08 +1100 Subject: [Libav-user] failed to open a flash screen video Message-ID: Hi Guys I'm using FFmpeg 1.1.3 and had a problem in opening a flash sc video. av_open_input_file succeeded but avformat_find_stream_info returns -1. FFmpeg output blow logs: 00:00:16.115 MAIN FFMPEG: Format flv probed with size=2048 and score=100 00:00:18.221 MAIN FFMPEG: File position before avformat_find_stream_info() is 13 00:00:18.284 MAIN FFMPEG: Could not find codec parameters for stream 0 (Video: flashsv): unspecified size Consider increasing the value for the 'analyzeduration' and 'probesize' options 00:00:18.284 MAIN FFMPEG: File position after avformat_find_stream_info() is 1263968 video link is here: https://dl.dropbox.com/u/89678527/flash-screen.flv and ffprobe output was $ ffprobe.exe flash-screen.flv ffprobe version N-50025-gb8bb661 Copyright (c) 2007-2013 the FFmpeg developers built on Feb 17 2013 02:37:45 with gcc 4.7.2 (GCC) configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib libavutil 52. 17.101 / 52. 17.101 libavcodec 54. 91.103 / 54. 91.103 libavformat 54. 63.100 / 54. 63.100 libavdevice 54. 
3.103 / 54. 3.103 libavfilter 3. 38.100 / 3. 38.100 libswscale 2. 2.100 / 2. 2.100 libswresample 0. 17.102 / 0. 17.102 libpostproc 52. 2.100 / 52. 2.100 Input #0, flv, from 'flash-screen.flv': Duration: 00:00:25.97, start: 0.000000, bitrate: 389 kb/s Stream #0:0: Video: flashsv, bgr24, 466x311, 3 tbr, 1k tbn, 1k tbc and ffplay can play the video successfully (ffprobe and ffplay were the downloaded Zeranoe builds and FFmpeg 1.1.3 was built by myself) Great thanks for any help -------------- next part -------------- An HTML attachment was scrubbed... URL: From goverall at hotmail.com Thu Mar 28 02:29:22 2013 From: goverall at hotmail.com (Gary Overall) Date: Wed, 27 Mar 2013 21:29:22 -0400 Subject: [Libav-user] Source code debugging libav using Xcode Message-ID: I am attempting an OSX project using the libav libraries using Xcode. I am able to successfully build static libraries using ./configure make and integrate them into my project. Everything is fine there. I am now interested in creating a project where I can source-code debug down into the libraries themselves using my Xcode development platform. Is this possible? If so can someone please point me in the right direction. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jettoblack at gmail.com Thu Mar 28 02:31:52 2013 From: jettoblack at gmail.com (Jason Livingston) Date: Wed, 27 Mar 2013 21:31:52 -0400 Subject: [Libav-user] Source code debugging libav using Xcode In-Reply-To: References: Message-ID: On Wed, Mar 27, 2013 at 9:29 PM, Gary Overall wrote: > I am attempting an OSX project using the libav libraries using Xcode. I am > able to successfully build static libraries using ./configure make and > integrate them into my project. Everything is fine there. I am now > interested in creating a project where I can source-code debug down into the > libraries themselves using my Xcode development platform. Is this > possible? If so can someone please point me in the right direction.
./configure --enable-debug --disable-stripping Works in Xcode for shared ffmpeg libs, not sure about static linking. From ffmpeg at gmail.com Thu Mar 28 02:35:19 2013 From: ffmpeg at gmail.com (Geek.Song) Date: Thu, 28 Mar 2013 09:35:19 +0800 Subject: [Libav-user] Source code debugging libav using Xcode In-Reply-To: References: Message-ID: On Thu, Mar 28, 2013 at 9:31 AM, Jason Livingston wrote: > On Wed, Mar 27, 2013 at 9:29 PM, Gary Overall > wrote: > > I am attempting an OSX project using the libav libraries using Xcode. I > am > > able to successfully build static libraries using ./configure make and > > integrate them into my project. Everything is fine there. I am now > > interested in creating a project where I can source-code debug down into > the > > libraries themselves using my Xcode development platform. Is this > > possible? If so can someone please point me it the right direction. > > ./configure --enable-debug --disable-stripping > > Works in Xcode for shared ffmpeg libs, not sure about static linking. > Static linking is OK as well -- ----------------------------------------------------------------------------------------- My key fingerprint: d1:03:f5:32:26:ff:d7:3c:e4:42:e3:51:ec:92:78:b2 -------------- next part -------------- An HTML attachment was scrubbed... URL: From kalileo at universalx.net Thu Mar 28 03:32:05 2013 From: kalileo at universalx.net (Kalileo) Date: Thu, 28 Mar 2013 09:32:05 +0700 Subject: [Libav-user] setting up x264 In-Reply-To: References: Message-ID: <3E53FECD-3162-41ED-A56C-92B8F7D0419D@universalx.net> On Mar 28, 2013, at 01:20 , Thomas Sharpless wrote: > I know the interface between libavcodec and libx264 is such that you can pass at least some of the native x264 option strings, such as preset names, through the opts argument to avcodec_open2(). However it also appears that this is not enough to put the codec into a usable state. 
> > Is it spelled out anywhere just what has to be set in the libavcodec contexts, and what can be set up with x264 options? > > Thanks > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user Hi Thomas and all, I found that you need to run avformat_find_stream_info to completely set up the codec, unfortunately. If you work with known / self produced streams (as I do), you should know all settings, and you should be able to manually set all required settings. Unfortunately I did not mange yet to get it to work reliably without avformat_find_stream_info, despite going through the avformat_find_stream_info code looking for clues. Please see my attached question posted here earlier (which also contains some av_dict_set() examples which might help you) which I think asks the same question as you do just in other words. > From: Kalileo > Subject: avformat_open_input using custom AVDictionary to set video_size > Date: January 24, 2013 04:35:58 GMT+07:00 > To: "ffmpeg libav-user" > > I'm producing MPEGTS streams (h264/aac) using ffmpeg, so I know exactly how they are coded, and I could modify that if needed. > > Now, when I receive them in my own code, I always have to run them through avformat_open_input and avformat_find_stream_info to get the settings into the codec context. > > Because udp mpegts streams have no header, both avformat_open_input and avformat_find_stream_info sometimes have problems to get all settings correctly. > > However, all settings are known, and so I try to inform avformat_open_input and avformat_find_stream_info about them, using a AVDictionary. > > However, despite setting the video_size to 720x576, I often get "unspecified size". Other AVDictionary options such as "probesize" are honored though. > > Ideally I want to skip the avformat_find_stream_info step completely and preset all settings. But how to do that correctly? 
> > Also sometimes avformat_open_input does not get it correctly that there is a video and an audio stream. > > I'm reading the stream though a memory buffer, and not directly from a file or url. This is what I tried: > > ====== > av_dict_set(&format_opts, "video_size", "720x576", 0); <<== seems to be ignored > // av_dict_set(&format_opts, "pixel_format", av_get_pix_fmt_name(ffmpeg::PIX_FMT_YUV420P), 0); > av_dict_set(&format_opts, "pixel_format", "yuv420p", 0); > av_dict_set(&format_opts, "sample_format", "fltp", 0); > av_dict_set(&format_opts, "analyzeduration", "8000000", 0); <<== honored > av_dict_set(&format_opts, "probesize", "8000000", 0); <<== honored > av_dict_set(&format_opts, "channels", "2", 0); > av_dict_set(&format_opts, "sample_rate", "48000", 0); > av_dict_set(&format_opts, "seekable", "0", 0); > ... > err = avformat_open_input(&pFormatCtx, "", inputFmt, &format_opts); > > => sometimes sees one stream only > > av_dict_set(&format_opts, "video_size", "720x576", 0); <<== seems to be ignored > ... > err = avformat_find_stream_info(pFormatCtx, &format_opts); > > > av_dump_format(pFormatCtx, 0, "Memory Buffer", false); > > => often says "unspecified size" > ====== > > I'm using the very latest ffmpeg 1.0.3 from 2 days ago. > > > My Questions are: > > How do I pass the video size to avformat_open_input and avformat_find_stream_info so that it is taken? > > Which settings do i have to set to set (and how) to avoid the need for avformat_find_stream_info? > > How do i tell avformat_open_input that there is a video stream and an audio stream, or help it to find them (possible a change in the encoding parameters)? > > > Thanks for any hints or help! > > Regards, > Kalileo > From brado at bighillsoftware.com Thu Mar 28 04:19:47 2013 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Wed, 27 Mar 2013 20:19:47 -0700 Subject: [Libav-user] Video and audio timing / syncing Message-ID: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> Onward! 
It sure feels like I'm on the verge of knocking this use case out that I've been working on. With a little more help, I think a win is not far off... So coming in off of QTKit capture and being processed using FFmpeg, I've now got perfect (sounding) audio. It would also appear that I have proper video too. However, the audio and video are out of sync -- great for a dubbed karate movie, not so great for customer usage (though I recommend they all learn karate). My suspicions are the following: 1. Either it has something to do with the AVPacket pts and duration settings, which I am setting using time_base adjusted values for presentationTime and duration coming from the QTSampleBuffer. 2. That the codec is not properly queuing / arranging the video frames and audio sample buffers in proper order. I'm not familiar with the internals of the adpcm_swf codec, or most video codecs in general, but I have had a question about timing. QTKit delivers captured audio samples and video frames on different callbacks, so technically, the time and frequency each are received are arbitrary. I'm processing each when received, so I believe in theory it might be possible for video frames and audio samples to be received out of order. Is the codec smart enough to arrange these in proper order? Is there a setting on the codec which governs or affects whatever internal queue or buffer time the codec waits for another packet with an earlier presentation time before outputting the buffer last sent to it? I would think the codec would have to be smart enough to handle this in encoding. However, after reading the source in this example: http://ffmpeg.org/doxygen/0.6/output-example_8c-source.html it appears that the sample is managing the order in which video and audio frames are written. Any insight into the proper approach? 
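[For the timing question: the usual pattern with libavformat is to let av_interleaved_write_frame handle the ordering, and simply rescale each capture timestamp from the capture clock into the stream's time_base before writing the packet. The rescale itself is plain rational arithmetic; below is a self-contained sketch of the same math av_rescale_q performs, minus the overflow handling the real function adds. Function name and rates are illustrative:]

```c
#include <stdint.h>

/* Rescale a timestamp from a source time base (src_num/src_den seconds
 * per tick) to a destination time base, rounding to nearest.  This is
 * the arithmetic av_rescale_q does, without its overflow protection. */
static int64_t rescale_ts(int64_t ts, int src_num, int src_den,
                          int dst_num, int dst_den)
{
    /* ts * (src_num/src_den) / (dst_num/dst_den) */
    int64_t num = (int64_t)src_num * dst_den;
    int64_t den = (int64_t)src_den * dst_num;
    return (ts * num + den / 2) / den;
}
```

[So a QTKit presentation time of 600 ticks on a 1/600 clock (one second) lands at 90000 in a 1/90000 MPEG-TS stream, and one frame at 25 fps advances pts by 3600 ticks. Getting this scale wrong by a factor of two is exactly what produces "video plays at twice the speed of the audio".]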
Other than that one example, I've seen no other examples which worry about sorting audio and video packets based on pts, they all appear to let that be something the codec worries about. If this isn't the source of the problem, can someone enlighten me as to what the usual suspects are when you have out of sync audio and video? Thanks, Brad From garyoverall at gmail.com Thu Mar 28 02:20:40 2013 From: garyoverall at gmail.com (Gary Overall) Date: Wed, 27 Mar 2013 21:20:40 -0400 Subject: [Libav-user] Compiling using Xcode Message-ID: <23CE34BD-2B75-40C4-BBBD-0C143D34C5B5@gmail.com> I am attempting an OSX project using the libav libraries using Xcode. I am able to successfully build static libraries using ./configure make and integrate them into my project. Everything is fine there. I am now interested in creating a project where I can source-code debug down into the libraries themselves using my Xcode development platform. Is this possible? If so can someone please point me in the right direction. From ubitux at gmail.com Thu Mar 28 05:25:45 2013 From: ubitux at gmail.com (Clément Bœsch) Date: Thu, 28 Mar 2013 05:25:45 +0100 Subject: [Libav-user] Video and audio timing / syncing In-Reply-To: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> References: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> Message-ID: <20130328042544.GG3758@leki> On Wed, Mar 27, 2013 at 08:19:47PM -0700, Brad O'Hearne wrote: [...] > I would think the codec would have to be smart enough to handle this in encoding. However, after reading the source in this example: > > http://ffmpeg.org/doxygen/0.6/output-example_8c-source.html > You realize FFmpeg 0.6 is 3 years old right? The examples have moved to a dedicated directory now, which is doc/examples. You can find them on http://git.videolan.org/?p=ffmpeg.git;a=tree;f=doc/examples;hb=HEAD or simply deployed with your installation (generally /usr/share/ffmpeg/doc/examples).
For a recent doxygen, see https://ffmpeg.org/doxygen/trunk/index.html If the examples and doxy documentation are not enough, you should have a look at ffmpeg*.c ffplay*.c files in the source root directory. -- Clément B. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 490 bytes Desc: not available URL: From brado at bighillsoftware.com Thu Mar 28 06:03:22 2013 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Wed, 27 Mar 2013 22:03:22 -0700 Subject: [Libav-user] Video and audio timing / syncing In-Reply-To: <20130328042544.GG3758@leki> References: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> <20130328042544.GG3758@leki> Message-ID: <047B80CE-E744-47C0-AA9E-992B3F78BE92@bighillsoftware.com> On Mar 27, 2013, at 9:25 PM, Clément Bœsch wrote: > You realize FFmpeg 0.6 is 3 years old right? I know it very well -- but with FFmpeg documentation / examples, new doesn't necessarily mean more helpful. I've spent more hours than I'd like to count scouring the Internet for anything that could shed light on various aspects of getting my use case built. That 3 year old example is one of the only video / audio encoding examples I've found that even addresses pts / dts. Take a look at the current decoding_encoding.c in the samples you refer to. It doesn't address this at all. In fact, the example itself is very limited in usefulness, as it isn't really addressing what would likely be a real-world use-case. For starters, video and audio encoding and output are completely separate, stand-alone examples, rather than having both audio and video simultaneously in play. The audio data is completely contrived -- it is bogus sound generated internally in the app, not drawn from an external capture source, file, or stream.
Neither video or audio encoding even have to deal with pts, other than this one line in the video encoding: frame->pts = i; which is a fabricated scenario of a hard-coded index rather than pulling decoding and presentation times and durations from a foreign source and scaling them properly to the time_base in question, and which then plays into the audio / video time sync issue mentioned. > If the examples and doxy documentation are not enough, you should have a > look at ffmpeg*.c ffplay*.c files in the source root directory. I'll take a look. As a point of constructive encouragement, the documentation and examples could really use some improvement so that they are analog with common use cases out there. Granted video / audio is a complicated domain, but the API is way, way too hard and time consuming to use and figure out over what it could be. I was actually pretty floored when I didn't find a whole host of examples for the very use case I've been struggling with -- I would have expected "I'd like to stream my webcam to Flash" to have historically been a pretty common need, especially on OS X, given there's virtually nothing in the way of outbound network or media protocols in the Cocoa API. That's actually one other reason I've posted my code on GitHub, in hopes of saving someone some time down the road in getting something built. But I digress...back to the task at hand...getting video and audio sync'd up. Thanks for the pointer Clement, I'll take a look at those.... 
Brad From kalileo at universalx.net Thu Mar 28 07:48:34 2013 From: kalileo at universalx.net (Kalileo) Date: Thu, 28 Mar 2013 13:48:34 +0700 Subject: [Libav-user] Video and audio timing / syncing In-Reply-To: <047B80CE-E744-47C0-AA9E-992B3F78BE92@bighillsoftware.com> References: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> <20130328042544.GG3758@leki> <047B80CE-E744-47C0-AA9E-992B3F78BE92@bighillsoftware.com> Message-ID: On Mar 28, 2013, at 12:03 , Brad O'Hearne wrote: > On Mar 27, 2013, at 9:25 PM, Cl?ment B?sch wrote: > >> You realize FFmpeg 0.6 is 3 years old right? > > I know it very well -- but with FFmpeg documentation / examples, new doesn't necessarily mean more helpful. I've spent more hours than I'd like to count scouring the Internet for anything that could shed light on various aspects of getting my use case built. That 3 year old example is one of the only video / audio encoding examples I've found that even addresses pts / dts. Take a look at the current decoding_encoding.c in the samples you refer to. It doesn't address this at all. In fact, the example itself is very limited in usefulness, as it isn't really addressing what would likely be a real-world use-case. > Although the example you are using might be three years old, that it does not mean that you cannot use recent ffmpeg. You might have to change a few function calls, but other than that it should work. I don't claim to be an expert, but here are some points which I remember when I built an encoder using the ffmpeg library: First of all, you should exclude that you introduce in your Player what could be responsible for audio/video not being synchronous. (I assume you did, but you are not mentioning it.) When you encode audio and video, you'll feed each packet with the dts and pts value. The encoding function for video and the encoding function for audio do not know each other, they do not communicate. 
You have to set these values for them, and pass them in when you write the already encoded packet. As far as I remember, the write function does not set these values, but only checks that what you pass in is plausible. If you mix already encoded audio and video, you have to remux it, which is basically the same writing of packets as after encoding, and in that process set the correct values for dts and pts. Once you understand that you are responsible for setting these values, and that there is no magic communication between audio and video involved, it is quite simple. if you base your dts and pts values on the time when you received the data after it went through various buffers, you have to take the delay caused by these buffers into consideration. Hth, regards, Kalileo From rjvbertin at gmail.com Thu Mar 28 09:29:52 2013 From: rjvbertin at gmail.com (=?ISO-8859-1?Q?Ren=E9_J=2EV=2E_Bertin?=) Date: Thu, 28 Mar 2013 09:29:52 +0100 Subject: [Libav-user] Source code debugging libav using Xcode In-Reply-To: References: Message-ID: <83954040-dfad-4402-90ed-66c1423df68a@email.android.com> In xcode 3, I simply add a new external build target to which I add the ffmpeg source tree. Add that target as a dependency to your own target, and you should be set. If you want to be extra sure, build ffmpeg with the same compiler you use in your project. BTW, I have a git project up over on github.com/RJVB that shows how to make a monolithic framework out of the FFmpeg libs . From cehoyos at ag.or.at Thu Mar 28 10:27:15 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Thu, 28 Mar 2013 09:27:15 +0000 (UTC) Subject: [Libav-user] creating a new filter... References: <321B8C36-D2F3-46E4-ACE5-F2D8D1E5ABC6@gmail.com> <78E1B518-E4B1-45E3-81B0-034E1C53D310@gmail.com> Message-ID: Richard Schilling writes: > I'm a bit confused, because it seems that FFMPEG should > allow me to write a new filter and add it to the library. It definitely does. 
Either send your patch (that implements the new filter) to ffmpeg-devel or set up a git clone and ask for a merge on ffmpeg-devel. Carl Eugen From stefasab at gmail.com Thu Mar 28 11:16:17 2013 From: stefasab at gmail.com (Stefano Sabatini) Date: Thu, 28 Mar 2013 11:16:17 +0100 Subject: [Libav-user] Video and audio timing / syncing In-Reply-To: <047B80CE-E744-47C0-AA9E-992B3F78BE92@bighillsoftware.com> References: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> <20130328042544.GG3758@leki> <047B80CE-E744-47C0-AA9E-992B3F78BE92@bighillsoftware.com> Message-ID: On Thu, Mar 28, 2013 at 6:03 AM, Brad O'Hearne wrote: > On Mar 27, 2013, at 9:25 PM, Cl?ment B?sch wrote: > >> You realize FFmpeg 0.6 is 3 years old right? > > I know it very well -- but with FFmpeg documentation / examples, new doesn't necessarily mean more helpful. I've spent more hours than I'd like to count scouring the Internet for anything that could shed light on various aspects of getting my use case built. That 3 year old example is one of the only video / audio encoding examples I've found that even addresses pts / dts. Take a look at the current decoding_encoding.c in the samples you refer to. It doesn't address this at all. In fact, the example itself is very limited in usefulness, as it isn't really addressing what would likely be a real-world use-case. libavformat/output-example.c was renamed to doc/examples/muxing.c, which is possibly cleaner and updated to the new API. decoding_encoding.c is meant as an usage example of the low-level encoding/decoding API. [...] > I'll take a look. As a point of constructive encouragement, the documentation and examples could really use some improvement so that they are analog with common use cases out there. Granted video / audio is a complicated domain, but the API is way, way too hard and time consuming to use and figure out over what it could be. 
I was actually pretty floored when I didn't find a whole host of examples for the very use case I've been struggling with -- I would have expected "I'd like to stream my webcam to Flash" to have historically been a pretty common need, especially on OS X, given there's virtually nothing in the way of outbound network or media protocols in the Cocoa API. That's actually one other reason I've posted my code on GitHub, in hopes of saving someone some time down the road in getting something built. We're lacking a complete (updated) tutorial, more examples and possibly a more high-level API for dealing with it. You're welcome to suggest what's missing as a feature request on trac, or send your own contribution (an example dealing with external APIs may be useful in doc/examples, even if it could be more complicated to test/integrate). At some point we could create some crowd-funding project to add the missing pieces, as everyone here seems busy with other stuff and no one ever before sponsored such a task (but keep in mind that documentation is improved day by day). Also we have a wiki which could be used for user-contributed coding examples and documentation. > But I digress...back to the task at hand...getting video and audio sync'd up. Thanks for the pointer Clement, I'll take a look at those.... [...] From mybrokenbeat at gmail.com Thu Mar 28 11:21:09 2013 From: mybrokenbeat at gmail.com (Oleg) Date: Thu, 28 Mar 2013 12:21:09 +0200 Subject: [Libav-user] Compiling using Xcode In-Reply-To: <23CE34BD-2B75-40C4-BBBD-0C143D34C5B5@gmail.com> References: <23CE34BD-2B75-40C4-BBBD-0C143D34C5B5@gmail.com> Message-ID: <3F2999B0-98A0-49FF-BDFF-1FF1C1102337@gmail.com> Short answer - there is no easy way for this. But you can do following: 1. Integrate ffmpeg into xcode project with "XCode->New->Target->Other->External build system". You can find(google) lot of examples on how to use it with configure & make. This will give you ability to "Build", "Clean" and "Run". 
I don't know exactly, but probably you should also run configure script with --extra-cflags=-g to be sure that debugging symbols are produced in ffmpeg's libraries. Also you must be sure, you're using same compiler and debugger both for xcode project and for building ffmpeg libraries (best of all it would be clang gcc and gdb). Otherwise, you will see strange things while debugging. or 2. Integrate ffmpeg as separate xcode project, with different targets that produce libav* libraries. It's much harder than #1 option, but it has some advantages. As well you also need here additional "External build system" for building files with "yasm" compiler. 28.0 > I am attempting an OSX project using the libav libraries using Xcode. I am able to successfully build static libraries using ./configure make and integrate them into my project. Everything is time there. I am now interested in creating a project where I can source-code debug down into the libraries themselves using my Xcode development platform. Is this possible? If so can someone please point me it the right direction. > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user From andy.y.huang at gmail.com Thu Mar 28 16:45:35 2013 From: andy.y.huang at gmail.com (Andy Huang) Date: Thu, 28 Mar 2013 10:45:35 -0500 Subject: [Libav-user] av_read_frame Message-ID: Hi, I am trying to write a mpeg player for iOS, for some reason, same code that calls "av_read_frame" has different result on Windows and iOS. after calling "av_read_frame", "stream_index" field inside " AVPacket" has non-zero value in iOS whereas on Windows it is always 0, when trying to play same stream. The code snipper is following: // header file class LibavDecoder { .......... 
    AVFormatContext *avFormatContextPtr;
    AVPacket avpkt;
}

// cpp file
void LibavDecoder::initStreams(AVFormatContext *avFormatContextPtr)
{
    AVStream *tempStream = NULL;
    AVCodec *tempCodec = NULL;
    StreamConfig tempConfig;
    for (size_t streamcnt = 0; streamcnt < avFormatContextPtr->nb_streams; ++streamcnt)
    {
        tempStream = avFormatContextPtr->streams[streamcnt];
        if ((tempStream->codec->codec_type == AVMEDIA_TYPE_VIDEO) ||
            (tempStream->codec->codec_type == AVMEDIA_TYPE_AUDIO))
        {
            tempConfig.stream = tempStream;
            tempCodec = avcodec_find_decoder(tempStream->codec->codec_id);
            tempConfig.codecContext = tempStream->codec;
            tempConfig.frameCnt = 0;
            avcodec_open2(tempConfig.codecContext, tempCodec, NULL);
            this->streamconfigs.push_back(tempConfig);
        }
    }
}

StreamConfig* LibavDecoder::getNextFrame(AVFormatContext* avFormatContextPtr, AVPacket* avpkt)
{
    int loop = 1;
    int err = 0;
    size_t configcnt = 0;
    StreamConfig *tempConfig = 0;
    while (loop == 1)
    {
        err = av_read_frame(avFormatContextPtr, avpkt);
        if (err < 0)
        {
            if (err != -11) // Resource not available, try again
            {
                if ((size_t)err != AVERROR_EOF)
                {
                    error("Error while av_read_frame", err);
                }
                loop = 0;
            }
        }
        else
        {
            configcnt = 0;
            while ((loop == 1) && (configcnt < this->streamconfigs.size()))
            {
                if (this->streamconfigs.at(configcnt).stream->index == avpkt->stream_index)
                {
                    tempConfig = &this->streamconfigs.at(configcnt);
                    loop = 0;
                }
                configcnt++;
            }
        }
        if (loop == 1)
            av_free_packet(avpkt);
    }
    return tempConfig;
}

any ideas why this happens? Thanks a bunch in advance. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From brado at bighillsoftware.com Thu Mar 28 21:03:03 2013 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Thu, 28 Mar 2013 13:03:03 -0700 Subject: [Libav-user] Video and audio timing / syncing In-Reply-To: References: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> <20130328042544.GG3758@leki> <047B80CE-E744-47C0-AA9E-992B3F78BE92@bighillsoftware.com> Message-ID: <088A5762-926F-4011-9A2B-D9828A7F99F6@bighillsoftware.com> On Mar 27, 2013, at 11:48 PM, Kalileo wrote: > When you encode audio and video, you'll feed each packet with the dts and pts value. The encoding function for video and the encoding function for audio do not know each other, they do not communicate. You have to set these values for them, and pass them in when you write the already encoded packet. As far as I remember, the write function does not set these values, but only checks that what you pass in is plausible. > If you mix already encoded audio and video, you have to remux it, which is basically the same writing of packets as after encoding, and in that process set the correct values for dts and pts. > > Once you understand that you are responsible for setting these values, and that there is no magic communication between audio and video involved, it is quite simple. > > if you base your dts and pts values on the time when you received the data after it went through various buffers, you have to take the delay caused by these buffers into consideration. Kalileo -- thanks for the reply. I can give a little more detail to the nature of the problem I'm experiencing. What I first thought was the video hanging half-way through the video wasn't that at all -- it was the video actually ending. The audio plays perfectly and sounds exactly as expected. But the video is playing at a much faster rate, and just completes in about half the time as the audio, and so it stops on the last frame. So the video is the problem -- audio is now perfect. 
I have been setting the pts and dts values on each video's AVPacket, and on this point, the muxing.c example file which another poster mentioned doesn't really clarify the issue for me. That example isn't receiving video frames arbitrarily from an external source, but rather, it is generating these frames and the pts values in an organized loop in sequential fashion. The example is also essentially orchestrating interleaving itself because it can -- that is, because it is generating its own data, it can alternate calls to write_frame for audio and video. It's weird, it's as if the video is playing at twice the speed of the audio. ... Brad From goverall at hotmail.com Fri Mar 29 01:02:27 2013 From: goverall at hotmail.com (Gary Overall) Date: Thu, 28 Mar 2013 20:02:27 -0400 Subject: [Libav-user] Source code debugging libav using Xcode In-Reply-To: <83954040-dfad-4402-90ed-66c1423df68a@email.android.com> References: , , <83954040-dfad-4402-90ed-66c1423df68a@email.android.com> Message-ID: Thank you for your help. I feel like I am very close. I did as you said, and it did actually build the libav libraries as I compiled and ran my project within Xcode. I had to put the location of the libraries in my header search path to be able to include the .h files. However, when I try to call anything in the libraries (i.e. av_register_all();) I get the following linker error. I just assumed that since I put the ffmpeg source tree as a target dependency, that the linker would find the library files. Am I still missing a critical step??

"_av_register_all", referenced from:
  -[FFAppDelegate applicationDidFinishLaunching:] in FFAppDelegate.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

It found the .h files, but it did not seem to find the libraries.
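[A target dependency only orders the builds; the static archives still have to be handed to the linker explicitly. A hedged sketch of the Other Linker Flags that usually resolves this kind of "symbol(s) not found" error -- the FFMPEG_SRC variable and the exact set of extra system libs are illustrative and depend on how the tree was configured:]

```shell
# Library search paths (point FFMPEG_SRC at the actual checkout,
# where "make" left the lib*.a archives)
-L$(FFMPEG_SRC)/libavformat -L$(FFMPEG_SRC)/libavcodec -L$(FFMPEG_SRC)/libavutil
# The libraries themselves, plus system libs FFmpeg was configured against
-lavformat -lavcodec -lavutil -lz -lbz2
```

[One other common cause: if the call site is a C++ or Objective-C++ file, the FFmpeg headers must be wrapped in extern "C" { ... }, or the compiler emits a mangled reference that the linker also reports as not found.]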
Thanks,Gary O > From: rjvbertin at gmail.com > Date: Thu, 28 Mar 2013 09:29:52 +0100 > To: libav-user at ffmpeg.org > Subject: Re: [Libav-user] Source code debugging libav using Xcode > > In xcode 3, I simply add a new external build target to which I add the ffmpeg source tree. Add that target as a dependency to your own target, and you should be set. If you want to be extra sure, build ffmpeg with the same compiler you use in your project. > BTW, I have a git project up over on github.com/RJVB that shows how to make a monolithic framework out of the FFmpeg libs . > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From kalileo at universalx.net Fri Mar 29 07:53:03 2013 From: kalileo at universalx.net (Kalileo) Date: Fri, 29 Mar 2013 13:53:03 +0700 Subject: [Libav-user] Video and audio timing / syncing In-Reply-To: <088A5762-926F-4011-9A2B-D9828A7F99F6@bighillsoftware.com> References: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> <20130328042544.GG3758@leki> <047B80CE-E744-47C0-AA9E-992B3F78BE92@bighillsoftware.com> <088A5762-926F-4011-9A2B-D9828A7F99F6@bighillsoftware.com> Message-ID: <7261CEEC-778A-4985-9C26-DA8F679B0986@universalx.net> On Mar 29, 2013, at 03:03 , Brad O'Hearne wrote: > On Mar 27, 2013, at 11:48 PM, Kalileo wrote: > >> When you encode audio and video, you'll feed each packet with the dts and pts value. The encoding function for video and the encoding function for audio do not know each other, they do not communicate. You have to set these values for them, and pass them in when you write the already encoded packet. As far as I remember, the write function does not set these values, but only checks that what you pass in is plausible. 
>> If you mix already encoded audio and video, you have to remux it, which is basically the same writing of packets as after encoding, and in that process set the correct values for dts and pts. >> >> Once you understand that you are responsible for setting these values, and that there is no magic communication between audio and video involved, it is quite simple. >> >> if you base your dts and pts values on the time when you received the data after it went through various buffers, you have to take the delay caused by these buffers into consideration. > > Kalileo -- thanks for the reply. I can give a little more detail to the nature of the problem I'm experiencing. What I first thought was the video hanging half-way through the video wasn't that at all -- it was the video actually ending. The audio plays perfectly and sounds exactly as expected. But the video is playing at a much faster rate, and just completes in about half the time as the audio, and so it stops on the last frame. So the video is the problem -- audio is now perfect. > > I have been setting the pts and dts values on each video's AVPacket, and on this point, the muxing.c example file which another poster mentioned doesn't really clarify the issue for me. That example isn't receiving video frames arbitrarily from an external source, but rather, it is generating these frames and the pts values in an organized loop in sequential fashion. The example is also essential orchestrating interleaving itself because it can -- that is, because it is generating its own data, it can alternate calls to write_frame for audio and video. > > It's weird, its as if the video is playing at twice the speed of the audio. > Hi Brad, when you start writing the packets (muxing them), you give each audio and video packet a DTS (and PTS) value. You can start at zero. At the start you give the first audio and the first video packet the same value. 
For every new packet you have to increase the DTS value accordingly, depending on the length of the audio or video packet before. Audio and video packets have different lengths, so you increase them using different step values. For example, you can increase the DTS value for every video packet by 4000, and for every audio packet by 2000 (you must correct these values depending on your codecs). If you use the correct step values, then at the end of your video, both audio and video DTS values should be roughly the same again. If they are not, your step value is wrong. That's all there is to it. Works perfectly for me. > > It's weird, it's as if the video is playing at twice the speed of the audio. > Looks like you do not take into consideration that an audio packet and a video packet do not have the same length! DTS is not a counter, but a time value. You do not increase it by 1, but by a value which reflects the length of the packet. There are formulas which describe how the usual DTS values are calculated. The important part is that the relations between the lengths of the different packets are mapped into these values. HtH, Regards, Kalileo From mczarnek at objectvideo.com Fri Mar 29 19:27:12 2013 From: mczarnek at objectvideo.com (Czarnek, Matt) Date: Fri, 29 Mar 2013 14:27:12 -0400 Subject: [Libav-user] How can I free data buffers missed by avcodec_free_frame? Message-ID: In the description for avcodec_free_frame, it states: "Warning: this function does NOT free the data buffers themselves" I have allocated my buffers as such:

int curAVFramesize = avpicture_get_size(PIX_FMT_YUV420P, ccontext->width, ccontext->height);
uint8_t* curAVFramePicBuffer = (uint8_t*)(av_malloc(curAVFramesize));
AVFrame *curAVFrame = avcodec_alloc_frame();
avpicture_fill((AVPicture *)curAVFrame, curAVFramePicBuffer, PIX_FMT_YUV420P, ccontext->width, ccontext->height);

I figured that the warning meant calling 'avpicture_free' was necessary. 
So I've been freeing it as:

avpicture_free((AVPicture *)curAVFrame);
avcodec_free_frame((AVFrame **)(&curAVFrame));

Usually my program doesn't complain, but every once in a while, after calling 'avpicture_free' but before 'avcodec_free_frame', it'll throw a heap allocation error. Here is the entire function: http://pastebin.com/jHecUySU Is avpicture_free needed? Any thoughts as to what might be happening? Thank you! Matt -- Matt Czarnek, Software Engineer Work Phone: (760) 4-OBJVID aka: (760) 462-5843 Cell Phone: HAHAHOORAY ObjectVideo Inc. http://www.objectvideo.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From brado at bighillsoftware.com Fri Mar 29 21:28:05 2013 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Fri, 29 Mar 2013 13:28:05 -0700 Subject: [Libav-user] Video and audio timing / syncing In-Reply-To: <7261CEEC-778A-4985-9C26-DA8F679B0986@universalx.net> References: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> <20130328042544.GG3758@leki> <047B80CE-E744-47C0-AA9E-992B3F78BE92@bighillsoftware.com> <088A5762-926F-4011-9A2B-D9828A7F99F6@bighillsoftware.com> Message-ID: On Mar 28, 2013, at 11:53 PM, Kalileo wrote: > Hi Brad, > > when you start writing the packets (muxing them), you give each audio and video packet a DTS (and PTS) value. You can start at zero. > > At the start you give the first audio and the first video packet the same value. For every new packet you have to increase the DTS value accordingly, depending on the length of the audio or video packet before. Audio and video packets have different lengths, so you increase them using different step values. > > For example, you can increase the DTS value for every video packets by 4000, and for every audio packets by 2000 (you must correct these values depending on your codecs). 
> > If you use the correct step values, then at the end of your video, both audio and video DTS values should be roughly the same again. If they are not, your step value is wrong. > > That's all already. Works perfectly for me. Kalileo -- hey thanks for taking the time to respond, it is good to hear from you again. I think you are probably right on target, but I have a few wrinkles to add which have caused me to scratch my head a bit. Check these few tidbits out: - Another poster has mentioned earlier in this thread (if I understood his point accurately) that audio and video streams (timing, that is) are completely unrelated in their handling. While we view these streams as a single rendered product, internally they are completely separate entities. There's kind of an issue of semantics here, but I'm not sure whether that agrees with or contradicts what you are saying above about the relationship between audio and video pts / dts. To the best of what I've been able to determine from mailing list responses, docs, and my testing, it would appear that these settings for audio don't have any material effect on settings for video and vice versa, but in viewing the output, they obviously would show sync problems if timings weren't right. This seems supported by the next several points which follow. - Here's an interesting note: it doesn't appear that pts and dts are even relevant for audio. I don't know whether that is the case across the board, or only in some specific circumstances, but I don't even have to set either value, and the audio is perfect both when writing video frames and when I completely turn off writing of all video frames. I've outputted the audio pts value when not setting it and it is complete junk, yet the audio is perfect. - If I completely turn off the writing of all audio frames, there is absolutely no change in video rendering -- it still renders video frames at twice the speed. 
This would seem to support the fact that a) pts might only be significant for video packets and not for audio, and b) there's no direct relationship between video and audio packet pts. So my next questions become the following: 1. Is setting the audio pts and dts even relevant? I've seen no functional indication that it is. 2. Is there any direct thing that the playback codecs do (other than just rendering at the proper time) to relate audio timing to video timing? There's no comparison or sequencing being done between values is there? 3. The whole setting of pts and dts is relative to the time_base configured on the codec context. According to the documentation, the time_base.num should be 1, and the time_base.den should be equal to the expected frames per second. I have both of these set accordingly. However, I got to thinking, what if you expect (I'm going to use round multiples for discussion here, I'm actually setting time_base.den to 24 fps) 30 fps, but at runtime receive only 15 fps. Will this internally have any material impact to rendering? I think this is where some of the FFmpeg code examples may be bypassing an issue common to many actual use-cases. They can virtually guarantee frame-rate and proper pts values by simply generating X frames and assigning them proper pts. But what happens when receiving these frames from an external source and frames aren't delivered at the frame rate expected? Is there some compensation that has to be done in code, or is the codec smart enough to render frames at the timings you stamp on them, regardless of whether the frame rate matches your time_base.den setting? 
Thanks, Brad From alexcohn at netvision.net.il Fri Mar 29 21:43:14 2013 From: alexcohn at netvision.net.il (Alex Cohn) Date: Fri, 29 Mar 2013 23:43:14 +0300 Subject: [Libav-user] Video and audio timing / syncing In-Reply-To: References: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> <20130328042544.GG3758@leki> <047B80CE-E744-47C0-AA9E-992B3F78BE92@bighillsoftware.com> <088A5762-926F-4011-9A2B-D9828A7F99F6@bighillsoftware.com> <7261CEEC-778A-4985-9C26-DA8F679B0986@universalx.net> Message-ID: > If I completely turn off the writing of all audio frames, there is absolutely no change in video rendering -- it still renders video frames at twice the speed. This may sound oversimplified, but maybe it will be OK to set the PTS to 2× the value you tried today. Note that the time base may be set separately for the container and for the video stream (depends on the format and on the codec). BR, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From brado at bighillsoftware.com Fri Mar 29 22:01:05 2013 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Fri, 29 Mar 2013 14:01:05 -0700 Subject: [Libav-user] Video and audio timing / syncing In-Reply-To: References: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> <20130328042544.GG3758@leki> <047B80CE-E744-47C0-AA9E-992B3F78BE92@bighillsoftware.com> <088A5762-926F-4011-9A2B-D9828A7F99F6@bighillsoftware.com> <7261CEEC-778A-4985-9C26-DA8F679B0986@universalx.net> Message-ID: <5020B806-DE0B-451A-A988-3683A543468F@bighillsoftware.com> On Mar 29, 2013, at 1:43 PM, Alex Cohn wrote: > Note that the time base may be set separately for the container and for the video stream (depends on the format and on the codec). Greetings Alex! Geesh...so many folks taking the time to answer, posting to the Libav-user mailing list appears to be a great way to meet some well-meaning folks! Anyway, can you expound a little further on your comment? 
I'm interested to know when you would do such a thing, and what the net effect is of doing so. This dovetails with another question I've had rattling around in my head, and that is the curious place where pts and dts are being set in different code examples I've come across. In some examples I've found, pts and dts are being set prior to encoding. In others, pts and dts are being set after encoding, but prior to writing the packet to an output source (file or stream). I think I've even come across an example or two which do both. Now I understand that the packet returned (if one is returned at all) from the av_encode_video2 function call might not contain the same frame data that was passed in the frame data buffer, so it leads me to believe that there might be significance both before and after encoding to a packet's pts and dts values. So...when are pts / dts supposed to be set?

a) before encoding
b) after encoding
c) both before and after encoding -- and if so, why is this?

Another way to ask this same question -- is the AVPacket.pts value relevant:

a) only during encoding
b) only during writing to output file or stream
c) both during encoding and during writing to output file or stream

-- and again, if this is the case, I'm curious why the AVPacket returned by the av_encode_video2 function hasn't itself set the pts value properly -- if I'm receiving frames arbitrarily from a capture source, and the av_encode_video2 function is returning a frame that isn't necessarily the same one I sent it, how would I necessarily know what the pts value should be, other than a calculation based on the frame rate (which puts me back in the boat of having to reconcile the frame rate expected vs. what is actually received)? 
Thx....:-) Brad From kalileo at universalx.net Fri Mar 29 22:12:16 2013 From: kalileo at universalx.net (Kalileo) Date: Sat, 30 Mar 2013 04:12:16 +0700 Subject: [Libav-user] Video and audio timing / syncing In-Reply-To: References: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> <20130328042544.GG3758@leki> <047B80CE-E744-47C0-AA9E-992B3F78BE92@bighillsoftware.com> <088A5762-926F-4011-9A2B-D9828A7F99F6@bighillsoftware.com> <7261CEEC-778A-4985-9C26-DA8F679B0986@universalx.net> Message-ID: On Mar 30, 2013, at 03:28 , Brad O'Hearne wrote: > On Mar 28, 2013, at 11:53 PM, Kalileo wrote: >> Hi Brad, >> >> when you start writing the packets (muxing them), you give each audio and video packet a DTS (and PTS) value. You can start at zero. >> >> At the start you give the first audio and the first video packet the same value. For every new packet you have to increase the DTS value accordingly, depending on the length of the audio or video packet before. Audio and video packets have different lengths, so you increase them using different step values. >> >> For example, you can increase the DTS value for every video packets by 4000, and for every audio packets by 2000 (you must correct these values depending on your codecs). >> >> If you use the correct step values, then at the end of your video, both audio and video DTS values should be roughly the same again. If they are not, your step value is wrong. >> >> That's all already. Works perfectly for me. > > Kalileo -- hey thanks for taking the time to respond, it is good to hear from you again. I think you are probably right on target, but I have a few wrinkles to add which have caused me to scratch my head a bit. Check these few tidbits out: > > - Another poster has mentioned earlier in this thread (if I understood his point accurately) that audio and video streams (timing that is) are completely unrelated in their handling. 
While we view these streams as single rendered product, that internally they are completely separate entities. Correct. > There's kind of an issue of semantics here, but I'm not sure whether that agrees with or contradicts above what you are saying about the relationship between audio and video pts / dts. No contradiction. > To the best of what I've been able to determine from mailing list responses, doc, and my testing, it would appear that these settings for audio don't have any material effect on settings for video and vice versa, Correct, except that they are used for syncing. > but in viewing the output, they obviously would show sync problems if timings weren't right. That's what I try to tell you, the length (time) is what you have to set using DTS/PTS, where same DTS means "play at the same time" > This seems supported by the next several points which follow. > > - Here's an interesting note: it doesn't appear that pts and dts are even relevant for audio. I don't know whether that is the case across the board, or only in some specific circumstances, but I don't even have to set either value, and the audio is perfect both in the case of writing video frames as well, or if I completely turn off writing of all video frames. I've outputted the audio pts value when not setting it and it is complete junk, yet the audio is perfect. Depends on your Player. In the case you describe the audio is the "master", and it just plays, one packet after the other. Audio packets do have a specific length, so that's working fine without additional timing info. > > - If I completely turn off the writing of all audio frames, there is absolutely no change in video rendering -- it still renders video frames at twice the speed. What Player are you using, what player shows that behavior? > This would seem to support the fact that a) pts might only be significant for video packets and not for audio, and Not correct. 
You can take the video timing as the master, and speed up / slow down the audio to follow the video. > b) there's no direct relationship between video and audio packet pts. Not correct. The relationship is the timing, the length. Same PTS means this video and this audio should be played at the same time. > > So my next questions become the following: > > 1. Is setting the audio pts and dts even relevant? I've seen no functional indication that it is. If you do not need your audio and video to stay in sync then yes, not relevant. However most Players will think that you want it to be in sync, so setting nonsense values will give you funny results. > > 2. Is there any direct thing that the playback codecs do (other than just rendering at the proper time) to relate audio timing to video timing? There's no comparison or sequencing being done between values is there? The codecs don't do any syncing. It's the Player which does take care of syncing. DTS/PTS is what helps the player doing that. > > 3. The whole setting of pts and dts is relative to the time_base configured on the codec context. According to the documentation, the time_base.num should be 1, and the time_base.den should be equal to the expected frames per second. I have both of these set accordingly. However, I got to thinking, what if you expect (I'm going to use round multiples for discussion here, I'm actually setting time_base.den to 24 fps) 30 fps, but at runtime receive only 15 fps. Will this internally have any material impact to rendering? I think this is where some of the FFmpeg code examples may be bypassing an issue common to many actual use-cases. They can virtually guarantee frame-rate and proper pts values by simply generating X frames and assigning them proper pts. But what happens when receiving these frames from an external source and frames aren't delivered at the frame rate expected? 
Is there some compensation that has to be done in code, Yes, check the DTS/PTS of audio and video and slow down one or speed up the other when they drift apart. That's the job of a player. > or is the codec smart enough to render frames at > the timings you stamp on them, regardless of whether the frame rate matches your time_base.den setting? I don't know why you keep thinking that the codec cares about the time when to render. You give the codec something to decode, or demux, and it does it, as fast as it can. Rendering/displaying happens after that, not by the codec but by some player code, and up to the player to make sure it keeps all in sync. You might want to study some examples based on the old Dranger's tutorial, where that stuff is explained in much better words than mine. Regards, Kalileo From alexcohn at netvision.net.il Fri Mar 29 22:25:12 2013 From: alexcohn at netvision.net.il (Alex Cohn) Date: Sat, 30 Mar 2013 00:25:12 +0300 Subject: [Libav-user] Video and audio timing / syncing In-Reply-To: <5020B806-DE0B-451A-A988-3683A543468F@bighillsoftware.com> References: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> <20130328042544.GG3758@leki> <047B80CE-E744-47C0-AA9E-992B3F78BE92@bighillsoftware.com> <088A5762-926F-4011-9A2B-D9828A7F99F6@bighillsoftware.com> <7261CEEC-778A-4985-9C26-DA8F679B0986@universalx.net> <5020B806-DE0B-451A-A988-3683A543468F@bighillsoftware.com> Message-ID: I am sorry I cannot write a detailed answer right now from my phone. Basically, the pts you set before encoding should be the right one, because you have little control over the delay introduced by the encoder. But this depends on the codec and on the container. I beg to differ from Kalileo in one aspect: the codec may care about pts, but on the other hand it may not. 
Anyways, theoretical background is important and very interesting for a curious mind, but since different players employ different logic to compensate for imperfect input files and imperfect specs, it's worth using some trial-and-error. Sincerely, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From brado at bighillsoftware.com Fri Mar 29 22:49:25 2013 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Fri, 29 Mar 2013 14:49:25 -0700 Subject: [Libav-user] Video and audio timing / syncing In-Reply-To: References: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> <20130328042544.GG3758@leki> <047B80CE-E744-47C0-AA9E-992B3F78BE92@bighillsoftware.com> <088A5762-926F-4011-9A2B-D9828A7F99F6@bighillsoftware.com> <7261CEEC-778A-4985-9C26-DA8F679B0986@universalx.net> Message-ID: <46153253-4986-4947-8CDD-4C2EA9664133@bighillsoftware.com> On Mar 29, 2013, at 2:12 PM, Kalileo wrote: All of the below really helps my understanding...I think a few more things I need to know to fill in the gaps: >> To the best of what I've been able to determine from mailing list responses, doc, and my testing, it would appear that these settings for audio don't have any material effect on settings for video and vice versa, > > Correct, except that they are used for syncing. Ok, is this a logical syncing (how synced video and audio appear to the user when played, i.e. an independent audio player just following audio pts while an independent video player following video pts) or a literal syncing -- i.e. is the player doing a direct comparison of the video pts and audio pts values to determine sequencing? > Depends on your Player. In the case you describe the audio is the "master", > Not correct. You can take the video timing as the master What determines whether audio or video is the "master"? Is this something I need to specify in output format context, codec, or stream configuration? 
>> >> - If I completely turn off the writing of all audio frames, there is absolutely no change in video rendering -- it still renders video frames at twice the speed. >> >> What Player are you using, what player shows that behavior? > > My output format is an FLV file. Once I run my app and output an FLV file to my desktop, I've tried in both VLC and Wondershare Video Converter Ultimate -- same result. > Not correct. The relationship is the timing, the length. Same PTS means this video and this audio should be played at the same time. Again, might be semantics, but you may have hit on the core of my problem. The video codec context and the audio codec context each have their own configured time base. The documentation for time_base in AVCodecContext reads as follows:

This is the fundamental unit of time (in seconds) in terms
of which frame timestamps are represented. For fixed-fps content,
timebase should be 1/framerate and timestamp increments should be
identically 1.

That obviously applies to the video codec context. But it doesn't mention anything about how to set this value for the audio codec context. I was tempted initially to set the audio codec context's time_base.den to the video frame rate (even though it is audio), but I ruled that out, for two reasons: 1. You can encode audio when there is no video stream and therefore no frame rate, so it would seem that if pts were important for audio, it would need to have some logical time_base when there was no frame_rate. 2. Once again, this is a hole in the examples given in muxing.c. It doesn't set the time_base on the audio codec context at all, so this is one of those threads of info that lead me to question, along with the fact that audio encoding and video encoding are completely separate operations and entities (which I read as meaning that their respective codec contexts were also separate entities, therefore not sharing a time_base value), whether pts for audio was even relevant or not. 
The logical alternative was to assign the audio codec context a time_base.den of the audio sample rate (44100). It sounds extremely plausible that this would be the relative culprit when audio and video are both in play, and as I stated prior, I suspected there was a relative mismatch, so I disabled audio entirely and was surprised to discover it had no material impact on the video at all. I think I'm close....thanks again for the discussion, it's really helping to shore up my understanding of how these things work. Brad From diego.acevedo at ttu.edu Fri Mar 29 16:25:29 2013 From: diego.acevedo at ttu.edu (Acevedo, Diego) Date: Fri, 29 Mar 2013 15:25:29 +0000 Subject: [Libav-user] Double free/corruption problem. Message-ID: <973CDF2B2C73264D8877A54B69E645110AD07C8A@cyclops03.ttu.edu> I am having the same problem where I cannot free my buffer after I allocate my custom AVIOContext. Did anyone find a solution to this? Any help would be much appreciated. Diego From lars.hammarstrand at gmail.com Sat Mar 30 00:32:33 2013 From: lars.hammarstrand at gmail.com (Lars Hammarstrand) Date: Sat, 30 Mar 2013 00:32:33 +0100 Subject: [Libav-user] Reference app with ffmpeg n1.2 libs that works on IOS ? Message-ID: Hi! We (at XBMC) are having problems with the new ffmpeg n1.2 libs on iOS (ref: XBMC work in Progress FFmpeg v1.1 ... ). I did a test drive with https://github.com/kolyvan/kxmovie but it seems to suffer from the same problem. I was wondering if someone is aware of a working app with ffmpeg n1.2 on iOS that we can use as a reference? -- Thanks in advance! Regards, Lars. Btw, here is a brief description of the problem:

1. XBMC is stopping at:

ff_pred8x8_128_dc_neon: (libavcodec/arm/h264pred_neon.S)
0x5bc84: cdpeq p15, #5, c15, c0, c0, #4 <-- Thread 3: EXC_BAD_INSTRUCTION (code=EXC_ARM_UNDEFINED, subcode=0xe50ff80)
0x5bc88: svclt #57436

2. 
The problem originates from libavcodec/h264_mb_template.c Code:

160: if (SIMPLE || !CONFIG_GRAY || !(h->flags & CODEC_FLAG_GRAY)) {
161: h->hpc.pred8x8[h->chroma_pred_mode](dest_cb, uvlinesize); <-- Crash - Thread 18 CDVDPlayer: EXC_BAD_INSTRUCTION
162: h->hpc.pred8x8[h->chroma_pred_mode](dest_cr, uvlinesize);
163: }

3. Stack trace:

#0 0x01197c18 in ff_pred8x8_128_dc_neon at libavcodec/arm/h264pred_neon.S:405
#1 0x0121f62c in hl_decode_mb_simple_8 at libavcodec/h264_mb_template.c:161
#2 0x01218266 in ff_h264_hl_decode_mb at libavcodec/h264.c:2415
#3 0x01225032 in decode_slice at libavcodec/h264.c:4207
#4 0x01224ddc in execute_decode_slices at libavcodec/h264.c:4357
#5 0x012174ce in decode_nal_units at libavcodec/h264.c:4701
#6 0x01221024 in decode_frame at libavcodec/h264.c:4813
#7 0x0136e252 in avcodec_decode_video2 at libavcodec/utils.c:1690
#8 0x0143675c in try_decode_frame at libavformat/utils.c:2562
#9 0x01434b1a in avformat_find_stream_info at libavformat/utils.c:2994

4. Definition of ff_pred8x8_128_dc_neon (libavcodec/arm/h264pred_neon.S):

function ff_pred8x8_128_dc_neon, export=1
        vmov.i8 q0, #128
        b .L_pred8x8_dc_end
endfunc

-- -------------- next part -------------- An HTML attachment was scrubbed... URL: From tksharpless at gmail.com Sat Mar 30 04:55:28 2013 From: tksharpless at gmail.com (Thomas Sharpless) Date: Fri, 29 Mar 2013 23:55:28 -0400 Subject: [Libav-user] makefile problem on win32 MinGW Message-ID: I have built older versions of ffmpeg on my Windows system using MinGW, but the latest snapshot refuses to build. After a bit of fuss I got a configure command to (almost) run to completion:

$ ./configure --prefix=.. --enable-gpl --enable-version3 --disable-programs --disable-doc --enable-libx264 --extra-ldflags="-L ../lib" --extra-cflags="-I ../include" --extra-libs=/mingw/lib/libpthread.dll.a

The last thing configure prints is:

License: GPL version 3 or later
Creating config.mak and config.h... 
> ./configure: line 4652: git: command not found > ./configure: line 4652: git: command not found

The very last part of the configure script fails, but it looks to me like all it is trying to do is make sure the source code is totally current. This is because I cloned the git repo with a Windows utility and there is no git installed in my MinGW. But then, horrors!

$ make
common.mak:139: *** missing separator. Stop.

That bit of common.mak reads (last line is 139):

define RULES
clean::
	$(RM) $(OBJS) $(OBJS:.o=.d)
	$(RM) $(HOSTPROGS)
	$(RM) $(TOOLS)
endef

$(eval $(RULES))

This is not due to any obvious missing tab character in common.mak. Can anyone diagnose? I really need this build! --Tom -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexcohn at netvision.net.il Sat Mar 30 05:05:23 2013 From: alexcohn at netvision.net.il (Alex Cohn) Date: Sat, 30 Mar 2013 07:05:23 +0300 Subject: [Libav-user] Reference app with ffmpeg n1.2 libs that works on IOS ? In-Reply-To: References: Message-ID: On 30 Mar 2013 02:32, "Lars Hammarstrand" wrote: > 1. XBMC is stopping at: > ff_pred8x8_128_dc_neon: (libavcodec/arm/h264pred_neon.S) > 0x5bc84: cdpeq p15, #5, c15, c0, c0, #4 <-- Thread 3: EXC_BAD_INSTRUCTION (code=EXC_ARM_UNDEFINED, subcode=0xe50ff80) > 0x5bc88: svclt #57436 Not all iOS devices are born equal in terms of their ARM core. Which device crashed for you with the bad instruction at "vmov.i8 q0, #128"? BR, Alex -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kalileo at universalx.net Sat Mar 30 05:10:34 2013 From: kalileo at universalx.net (Kalileo) Date: Sat, 30 Mar 2013 11:10:34 +0700 Subject: [Libav-user] Video and audio timing / syncing In-Reply-To: References: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> <20130328042544.GG3758@leki> <047B80CE-E744-47C0-AA9E-992B3F78BE92@bighillsoftware.com> <088A5762-926F-4011-9A2B-D9828A7F99F6@bighillsoftware.com> <7261CEEC-778A-4985-9C26-DA8F679B0986@universalx.net> <5020B806-DE0B-451A-A988-3683A543468F@bighillsoftware.com> Message-ID: <118FA509-CD85-40EB-BD19-408975475088@universalx.net> On Mar 30, 2013, at 04:25 , Alex Cohn wrote: > I am sorry I cannot write a detailed answer right now from my phone. Basically, the pts you set before encoding should be the right one, because you have little control over the delay introduced by the encoder. But this depends on the codec and on the container. > > I beg to differ from Kalileo in one aspect: the codec may care about pts, but on the other hand it may not. > > We do not differ here. A codec in the sense of a decoder for video or audio, I don't see that (except for rearranging the order of video frames based on PTS, as with x264 output containing b-frames, if needed), but a codec in the sense of a muxer writing the combined video will check whether the DTS/PTS values you provide are somewhat plausible. Then again, I've only worked with a few codecs, x264, aac, mpegts, so my view of the codec world might be quite limited. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alexcohn at netvision.net.il Sat Mar 30 05:22:46 2013 From: alexcohn at netvision.net.il (Alex Cohn) Date: Sat, 30 Mar 2013 07:22:46 +0300 Subject: [Libav-user] Video and audio timing / syncing In-Reply-To: <118FA509-CD85-40EB-BD19-408975475088@universalx.net> References: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> <20130328042544.GG3758@leki> <047B80CE-E744-47C0-AA9E-992B3F78BE92@bighillsoftware.com> <088A5762-926F-4011-9A2B-D9828A7F99F6@bighillsoftware.com> <7261CEEC-778A-4985-9C26-DA8F679B0986@universalx.net> <5020B806-DE0B-451A-A988-3683A543468F@bighillsoftware.com> <118FA509-CD85-40EB-BD19-408975475088@universalx.net> Message-ID: On 30 Mar 2013 07:11, "Kalileo" wrote: > > On Mar 30, 2013, at 04:25 , Alex Cohn wrote: >> >> I beg to differ from Kalileo in one aspect: the codec may care about pts, but on the other hand it may not. > > We do not differ here. ;-) > Codec as in decoder for video or audio, i don't see that, (except rearranging the order of video frames based on PTS as in x264 containing b-frames, if needed), This is the one scenario which taught me the lesson above, but there may be others. The key word is *may*. BR, Alex -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kalileo at universalx.net Sat Mar 30 05:55:28 2013 From: kalileo at universalx.net (Kalileo) Date: Sat, 30 Mar 2013 11:55:28 +0700 Subject: [Libav-user] Video and audio timing / syncing In-Reply-To: <46153253-4986-4947-8CDD-4C2EA9664133@bighillsoftware.com> References: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> <20130328042544.GG3758@leki> <047B80CE-E744-47C0-AA9E-992B3F78BE92@bighillsoftware.com> <088A5762-926F-4011-9A2B-D9828A7F99F6@bighillsoftware.com> <7261CEEC-778A-4985-9C26-DA8F679B0986@universalx.net> <46153253-4986-4947-8CDD-4C2EA9664133@bighillsoftware.com> Message-ID: On Mar 30, 2013, at 04:49 , Brad O'Hearne wrote: > On Mar 29, 2013, at 2:12 PM, Kalileo wrote: > > All of the below really helps my understanding...I think a few more things I need to know to fill in the gaps: >>> To the best of what I've been able to determine from mailing list responses, doc, and my testing, it would appear that these settings for audio don't have any material effect on settings for video and vice versa, >> >> Correct, except that they are used for syncing. > > Ok, is this a logical syncing (how synced video and audio appear to the user when played, i.e. an independent audio player just following audio pts Why? If you play audio only, you do not need to sync with anything, DTS/PTS is not really needed, the length of an audio packet is determined already by its content. When one packet has finished playing the next is played. > while an independent video player following video pts) or a literal syncing -- i.e. is the player doing a direct comparison of the video pts and audio pts values to determine sequencing? > >> Depends on your Player. In the case you describe the audio is the "master", > >> Not correct. You can take the video timing as the master > > What determines whether audio or video is the "master"? Is this something I need to specify in output format context, codec, or stream configuration? 
The player can do as it pleases, i.e. whoever coded the player decided how to handle that. There might be options to tell the player what you prefer. > >>> >>> - If I completely turn off the writing of all audio frames, there is absolutely no change in video rendering -- it still renders video frames at twice the speed. >> >> What player are you using, what player shows that behavior? > > My output format is an FLV file. Once I run my app and output an FLV file to my desktop, I've tried in both VLC and Wondershare Video Converter Ultimate -- same result. You give the player wrong info. You apparently missed what Alex told you earlier: On Mar 30, 2013, at 03:43 , Alex Cohn wrote: > This may sound oversimplified, but maybe it will be OK to set the PTS to 2? the value you tried today. What I think you are still missing is the fact that audio packets have a fixed length. For every audio packet you can calculate how long it is (using the audio sample rate, channel count, and size of the (decoded) data). So an audio packet contains audio for exactly x ms. Video does not have that info built in as strongly: an image is equally correct whether it is displayed for 1 ms or 100 ms. To decide how long to show it the player can use fps, which is much less precise than the audio sample rate. And it might be not only fps, but also time base and ticks to consider. If you play audio and video together, you will find that - despite playing each correctly - they drift apart after a while, and you can correct that drift by checking the DTS/PTS of each stream, and slowing down one or speeding up the other accordingly. It is up to the player to decide which one to correct. Please do, as already suggested, some reading on the basics: On Mar 30, 2013, at 04:12 , Kalileo wrote: > You might want to study some examples based on the old Dranger's tutorial, where that stuff is explained in much better words than mine.
It's too long ago that I read them, so I cannot give you a link, but they helped me a lot to get started. Google for Dranger. You wouldn't need to ask all these questions if you already read that stuff. Another thing you can do is to take a stream which plays correctly, and analyze the dts values of audio and video used there. This might show you right away where you're different. From lars.hammarstrand at gmail.com Sat Mar 30 07:18:14 2013 From: lars.hammarstrand at gmail.com (Lars Hammarstrand) Date: Sat, 30 Mar 2013 07:18:14 +0100 Subject: [Libav-user] Reference app with ffmpeg n1.2 libs that works on IOS ? In-Reply-To: References: Message-ID: 2013/3/30 Alex Cohn > On 30 Mar 2013 02:32, "Lars Hammarstrand" > wrote: > > 1. XBMC is stopping at: > > ff_pred8x8_128_dc_neon: (libavcodec/arm/h264pred_neon.S) > > 0x5bc84: cdpeq p15, #5, c15, c0, c0, #4 <-- Thread 3: > EXC_BAD_INSTRUCTION (code=EXC_ARM_UNDEFINED, subcode=0xe50ff80) > > 0x5bc88: svclt #57436 > > Not all iOS devices are born equal in terms of their ARM core. Which > device crashed for you with bad instruction at > > > vmov.i8 q0, #128 > > BR, > Alex > Ipad 1st gen, ios 5.0. 
FFmpeg libs built with the following config flags: # ./configure --target-os=darwin --disable-muxers --disable-encoders --disable-devices --disable-doc --disable-ffplay --disable-ffmpeg --disable-ffprobe --disable-ffserver --disable-vda --disable-crystalhd --disable-decoder=mpeg_xvmc --cpu=cortex-a8 --arch=arm --enable-cross-compile --enable-pic --disable-armv5te --disable-armv6t2 --enable-neon --disable-libvorbis --enable-gpl --enable-postproc --enable-static --enable-pthreads --enable-muxer=spdif --enable-muxer=adts --enable-encoder=ac3 --enable-encoder=aac --enable-protocol=http --enable-runtime-cpudetect --cc=clang --as='/Users/Shared/xbmc-depends/buildtools-native/bin/gas-preprocessor.pl/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/llvm-gcc-4.2' --extra-cflags='-O2 -mcpu=cortex-a8 -mfpu=neon -ftree-vectorize -mfloat-abi=softfp -pipe -Wno-trigraphs -fpascal-strings -Wreturn-type -Wunused-variable -fmessage-length=0 -gdwarf-2 -arch armv7 -miphoneos-version-min=4.2 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS6.1.sdk -I/Users/Shared/xbmc-depends/iphoneos6.1_armv7-target/include -mcpu=cortex-a8 -mfpu=neon -ftree-vectorize -mfloat-abi=softfp -pipe -Wno-trigraphs -fpascal-strings -Wreturn-type -Wunused-variable -fmessage-length=0 -gdwarf-2 -arch armv7 -miphoneos-version-min=4.2 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS6.1.sdk -I/Users/Shared/xbmc-depends/iphoneos6.1_armv7-target/include -O3 -g -D_DEBUG -Wall -w -D_DARWIN_C_SOURCE -Dattribute_deprecated=' -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alexcohn at netvision.net.il Sat Mar 30 07:23:46 2013 From: alexcohn at netvision.net.il (Alex Cohn) Date: Sat, 30 Mar 2013 09:23:46 +0300 Subject: [Libav-user] Video and audio timing / syncing In-Reply-To: References: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> <20130328042544.GG3758@leki> <047B80CE-E744-47C0-AA9E-992B3F78BE92@bighillsoftware.com> <088A5762-926F-4011-9A2B-D9828A7F99F6@bighillsoftware.com> <7261CEEC-778A-4985-9C26-DA8F679B0986@universalx.net> <46153253-4986-4947-8CDD-4C2EA9664133@bighillsoftware.com> Message-ID: On 30 Mar 2013 07:56, "Kalileo" wrote: > Another thing you can do is to take a stream which plays correctly, and analyze the dts values of audio and video used there. This might show you right away where you're different. This is rarely practical: there are multiple ways to construct an FLV movie that will play correctly in VLC, and there is no easy way to find which subtle difference causes the divergence. Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From cehoyos at ag.or.at Sat Mar 30 07:29:14 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Sat, 30 Mar 2013 06:29:14 +0000 (UTC) Subject: [Libav-user] makefile problem on win32 MinGW References: Message-ID: Thomas Sharpless writes: > ./configure: line 4652: git: command not found > > The very last part of the configure script fails but > it looks to me like all it is trying to do is make > sure the source code is totally current. It only checks whether the repository you are using is current; you don't have to worry about the warnings. > But then, horrors! > $ make > common.mak:139: *** missing separator. Stop. This indicates that your checkout is broken, use: $ git config --global core.autocrlf false and checkout again.
Carl Eugen From cehoyos at ag.or.at Sat Mar 30 07:47:32 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Sat, 30 Mar 2013 06:47:32 +0000 (UTC) Subject: [Libav-user] Reference app with ffmpeg n1.2 libs that works on IOS ? References: Message-ID: Lars Hammarstrand writes: > > Not all iOS devices are born equal in terms of > > their ARM core. Which device crashed for you with > > bad instruction at > > > vmov.i8 q0, #128 This function has been unchanged for three years. If this really is a regression, please consider using git bisect to find the version introducing it. (This isn't trivial with FFmpeg, but I will support you.) > --extra-cflags='-O2 [...] This is probably unrelated, but could you explain why you are adding (most) options? Some of them obviously make no sense; the paths are of course needed; for the remaining ones, I'd like to ask if they fix any problems. If yes, this should be fixed in configure, don't you agree? Carl Eugen From brado at bighillsoftware.com Sat Mar 30 08:05:27 2013 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Sat, 30 Mar 2013 00:05:27 -0700 Subject: [Libav-user] Video and audio timing / syncing In-Reply-To: References: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> <20130328042544.GG3758@leki> <047B80CE-E744-47C0-AA9E-992B3F78BE92@bighillsoftware.com> <088A5762-926F-4011-9A2B-D9828A7F99F6@bighillsoftware.com> <7261CEEC-778A-4985-9C26-DA8F679B0986@universalx.net> <46153253-4986-4947-8CDD-4C2EA9664133@bighillsoftware.com> Message-ID: <51BD14AE-0E40-4D2C-8565-7B17A6FE967B@bighillsoftware.com> On Mar 29, 2013, at 11:23 PM, Alex Cohn wrote: > On 30 Mar 2013 07:56, "Kalileo" wrote: > > Another thing you can do is to take a stream which plays correctly, and analyze the dts values of audio and video used there. This might show you right away where you're different.
> > This is rarely practical: there are multiple ways to construct an FLV movie that will play correctly in VLC, and there is no easy way to find which subtle difference causes the divergence. Well, here's the rub -- thanks to QTKit, and the QTSampleBuffer it delivers for both video and audio, I don't have to calculate pts, dts, or duration -- those time values are already delivered with the data buffer, along with its associated time scale, so converting to time_base units is merely a simple math problem. However, using those units (and I've verified in the console log that these values are all sequential and ascending as expected) it still isn't right. It is closer, and the audio still seems perfect, but the video still seems to play just a bit too fast, cutting something like 2 seconds off a 12 second video. Questions:
1. I'm still not completely clear on the needed time_base.den value for the audio codec context -- should that be the same as the time_base.den value for the video codec context (which is essentially the video frame rate) or something else? Like I said, the muxing.c example doesn't appear to set this at all, so pts values have to conform to some scale.
2. One thing I find really interesting is that all of these time_base units, pts, dts, and duration are integral. If video / audio timing needs exactness, why aren't these things floats for the purposes of finer-grained precision?
3. Assuming for a moment that pts and dts settings are right, is there any other possible factor that can throw off timing even with pts / dts values determined directly from capture? I encountered one code sample this past week which was different from all others I had looked at -- after the av_write_frame call, it followed with a loop to flush "delayed frames", feeding the encoder a NULL data buffer (no pts set on the packet though). Is it possible that there are frames somehow not making it out of the encoder?
Brad From lars.hammarstrand at gmail.com Sat Mar 30 11:49:45 2013 From: lars.hammarstrand at gmail.com (Lars Hammarstrand) Date: Sat, 30 Mar 2013 11:49:45 +0100 Subject: [Libav-user] Reference app with ffmpeg n1.2 libs that works on IOS ? In-Reply-To: References: Message-ID: 2013/3/30 Carl Eugen Hoyos > Lars Hammarstrand writes: > > > > Not all iOS devices are born equal in terms of > > > their ARM core. Which device crashed for you with > > > bad instruction at > > > > vmov.i8 q0, #128 > > This function has been unchanged for three years. > > If this really is a regression, please consider > using git bisect to find the version introducing > it. (This isn't trivial with FFmpeg, but I will > support you.) > > Ok, thanks - although I believe it will be a quite lengthy and cumbersome process, as xbmc currently is based on ffmpeg version 0.10.2 ;-) What do you say about regression testing the binary way and start somewhere in the middle. What version do you suggest we start with? This is probably unrelated, but could you explain > why you are adding (most) options? > Some of them obviously make no sense, the paths > are of course needed, for the remaining ones, > I'd like to ask if they fix any problems? If > yes, this should be fixed in configure, don't > you agree? > > Carl Eugen > > Agreed, it would be nice to clean up and use only the necessary flags. I suspect many of the settings are a legacy from the old 0.10.2 days. -- Regards, Lars. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rjvbertin at gmail.com Sat Mar 30 12:16:34 2013 From: rjvbertin at gmail.com (=?iso-8859-1?Q?=22Ren=E9_J=2EV=2E_Bertin=22?=) Date: Sat, 30 Mar 2013 12:16:34 +0100 Subject: [Libav-user] Video and audio timing / syncing In-Reply-To: References: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> <20130328042544.GG3758@leki> <047B80CE-E744-47C0-AA9E-992B3F78BE92@bighillsoftware.com> <088A5762-926F-4011-9A2B-D9828A7F99F6@bighillsoftware.com> <7261CEEC-778A-4985-9C26-DA8F679B0986@universalx.net> <46153253-4986-4947-8CDD-4C2EA9664133@bighillsoftware.com> Message-ID: <1CD37ACA-6B93-4E3F-9997-89FE9C8B33CE@gmail.com> On Mar 30, 2013, at 05:55, Kalileo wrote: > What I think you are still missing is the fact that audio packets have a fixed length. For every audio packet you can calculate how long it is (using audio sample rate, channels count, size of the (decoded) data). So an audio packet contains audio for exactly x ms. > > Video does not have that info built in that strongly, You can show a the image is correct wether displayed 1 ms or 100 ms. To decide What's confusing here is that for your audio claim to hold true, one needs at least 3 bits of information in addition to the sample's byte length (which is a given, after decoding): sample rate, sample size (short, int, float, ...) and channel count. For a video frame, the minimum requirement is to know width and height in order to map a buffer to an image, but there is no reason a packet could not include additional information. After all, even if duration is a necessary info bit for audio, one could argue that this is the case for image data too, in a video context. > Quoth Brad: > Well, here's the rub -- thanks to QTKit, and the QTSampleBuffer it delivers for both video and audio, I don't have to calculate pts, dts, or duration -- those time values are already delivered with the data buffer, along with its associated time scale, so converting to time_base units is merely a simple math problem. 
Converting with simple math isn't calculating? :) Seriously, why not post the information provided by QTKit, and how you convert it? Seems it could be quite easy to confound QT's time scale and FFmpeg's time_base units? R. From lars.hammarstrand at gmail.com Sat Mar 30 12:17:15 2013 From: lars.hammarstrand at gmail.com (Lars Hammarstrand) Date: Sat, 30 Mar 2013 12:17:15 +0100 Subject: [Libav-user] Reference app with ffmpeg n1.2 libs that works on IOS ? In-Reply-To: References: Message-ID: Does anyone knows if there is a working version of VLC for iOS (iphone/ipad) based on ffmpeg v1.1 or later? (googled quite a bit but couldn't find any useful info...) . /Lars. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike at mikeversteeg.com Sat Mar 30 14:22:53 2013 From: mike at mikeversteeg.com (Mike Versteeg) Date: Sat, 30 Mar 2013 14:22:53 +0100 Subject: [Libav-user] Best way to create a fractional ms timer? Message-ID: Are there any functions in libav* available that can help me create a fractional ms timer, say 33.33 ms? Windows is not offering this, and one can only achieve this by adding code to the timer event, code that can and will be interrupted by the scheduler. I prefer something more robust.. Thanks, Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From tksharpless at gmail.com Sat Mar 30 15:34:52 2013 From: tksharpless at gmail.com (Thomas Sharpless) Date: Sat, 30 Mar 2013 10:34:52 -0400 Subject: [Libav-user] makefile problem on win32 MinGW In-Reply-To: References: Message-ID: Thanks Carl I solved the problem by replacing the git clone with a release tarball -- I have no need to follow the latest snapshot. 
-- Tom On Sat, Mar 30, 2013 at 2:29 AM, Carl Eugen Hoyos wrote: > Thomas Sharpless writes: > > > ./configure: line 4652: git: command not found > > > > The very last part of the configure script fails but > > it looks to me like all it is trying to do is make > > sure the source code is totally current. > > It looks if the repository you are using is > current, you don't have to worry about the warnings. > > > But then, horrors! > > $ makecommon.mak:139: *** missing separator. Stop. > > This indicates that your checkout is broken, use: > $ git config --global core.autocrlf false > and checkout again. > > Carl Eugen > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cehoyos at ag.or.at Sat Mar 30 15:45:46 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Sat, 30 Mar 2013 14:45:46 +0000 (UTC) Subject: [Libav-user] makefile problem on win32 MinGW References: Message-ID: Thomas Sharpless writes: > I solved the problem by replacing the git clone > with a release tarball -- I have no need to follow > the latest snapshot. Please understand that this is not suggested unless you are a distributor. (The download page also offers a tarball of the latest snapshot.) Please do not top-post here, Carl Eugen From cehoyos at ag.or.at Sat Mar 30 15:50:54 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Sat, 30 Mar 2013 14:50:54 +0000 (UTC) Subject: [Libav-user] Reference app with ffmpeg n1.2 libs that works on IOS ? References: Message-ID: Lars Hammarstrand writes: > If this really is a regression, please consider > using git bisect to find the version introducing > it. (This isn't trivial with FFmpeg, but I will > support you.) > > > Ok, thanks - although I believe it will be a quite > lengthy and cumbersome process It usually takes me <15 minutes, but arguably I know what to do. 
As said, I can help you. > as xbmc currently is based on ffmpeg version 0.10.2 I may misunderstand: Is the crash only reproducible with xbmc and not with FFmpeg? If yes, the bisect will probably not help because the reason is something like an insufficiently aligned buffer within xbmc. If the crash is reproducible with FFmpeg, you don't need xbmc to test. > What do you say about regression testing the binary > way and start somewhere in the middle. What version > do you suggest we start with? The optimal version to start is of course the one you know is working fine (if I suggested one "in the middle" it would not save you more than one compile). If you know that 0.10.2 is working, you could start with 3c5fe5b (that will save you one compile). Carl Eugen From rotovnik.tomaz at gmail.com Sat Mar 30 16:37:26 2013 From: rotovnik.tomaz at gmail.com (=?ISO-8859-2?Q?Toma=BE_Rotovnik?=) Date: Sat, 30 Mar 2013 16:37:26 +0100 Subject: [Libav-user] libvorbis encoder problem Message-ID: Hi I checked the doc/examples/muxing.c file where an example generating an MPEG AV file is explained. Then I tried to change the AV format to "webm" (video is encoded with VP8 and audio with the vorbis encoder). I have problems with setting the right parameters for the vorbis encoder. When debugging through the code I figured out that vorbis only accepts the AV_SAMPLE_FMT_FLTP sample format type. The example was done with AV_SAMPLE_FMT_S16. I looked into the source code to figure out that when I call avcodec_fill_audio_frame(AVFrame, channels, sample_format, buffer, buf_size, align), in case of AV_SAMPLE_FMT_FLTP the buffer should be of type float* and audio samples should have values between -1.0 and 1.0. For more than one channel the values are not interleaved (FLTP - float planar) but follow each other (array: all values for channel 0, all values for channel 1, ...). When I apply those changes in my code, unfortunately I still don't get a correct result.
If I use mono (1 channel only) then the flag (got_packet) returned from function avcodec_encode_audio2 is set only once (after around 5 consecutive calls), with AVPacket->pts timestamp set to some huge values. Because of that only video is encoded. When I set stereo mode I get error from function av_interleaved_write_frame (-12). I tested the same code and setting AV format to "asf", where audio is encoded with WMA2 encoder and also accepts AV_SAMPLE_FMT_FLTP sample format type. I got correct AV file which can be played with VLC player or Windows media player. I think I still need to set some flags for vorbis encoder, but I can't figure out. I would appreciate any suggestions. Best regards For audio encoder I set those parameters: c->sample_fmt = AV_SAMPLE_FMT_FLTP; c->sample_rate = 44100; c->channels = 1; My code to prepare samples in AV_SAMPLE_FMT_FLTP sample format: void TT_Transcode::get_audio_frame_fltp(float *fsamples, int frame_size, int nb_channels) { int j, i; float v; float *q; for (j = 0; j < frame_size; j++) { v = (sin(t) * 10000.0) / 32767.0f; //values between -1.0 and +1.0 fsamples[ j ] = v; for (i = 1; i < nb_channels; i++) { fsamples[i * in_linesize + j] = v; } t += tincr; tincr += tincr2; } } -------------- next part -------------- An HTML attachment was scrubbed... URL: From lars.hammarstrand at gmail.com Sat Mar 30 18:35:28 2013 From: lars.hammarstrand at gmail.com (Lars Hammarstrand) Date: Sat, 30 Mar 2013 18:35:28 +0100 Subject: [Libav-user] Reference app with ffmpeg n1.2 libs that works on IOS ? In-Reply-To: References: Message-ID: 2013/3/30 Carl Eugen Hoyos > Lars Hammarstrand writes: > > > If this really is a regression, please consider > > using git bisect to find the version introducing > > it. (This isn't trivial with FFmpeg, but I will > > support you.) > > > > > > Ok, thanks - although I believe it will be a quite > > lengthy and cumbersome process > > It usually takes me <15 minutes, but arguably I know > what to do. > As said, I can help you. > > > as xbmc currently is based on ffmpeg version 0.10.2 > > I may misunderstand: Is the crash only reproducible > with xbmc and not with FFmpeg? > Xbmc is entirely based on the ffmpeg libraries and won't work without it so I'm not really sure what you mean by that. Xbmc starts up just fine and you can navigate through the different menus without any problems but as soon as you try to playback any type of video (file or stream) it will crash. If yes, the bisect will probably not help because > the reason is something like an insufficiently aligned > buffer within xbmc. > If the crash is reproducible with FFmpeg, you don't > need xbmc to test. Sounds very good, but how?
With the ffmpeg tools (ffplay, etc) as a stand alone package directly on ios? > > What do you say about regression testing the binary > > way and start somewhere in the middle. What version > > do you suggest we start with? > > The optimal version to start is of course the one > you know is working fine (if I suggested one "in > the middle" it would not save you more than one > compile). > > If you know that 0.10.2 is working, you could > start with 3c5fe5b (that will save you one > compile). Ok, what action plan do we start with, I mean should we jump directly into the regression testing or what do you suggest? Btw, I got Kxmovie (github.com/kolyvan/kxmovie) to work by configuring ffmpeg with the "--disable-asm" flag, thus I'm pretty sure it's some kind of asm optimization that causes the problem. FYI, kxmovie is also based on ffmpeg n1.2 and crashes in the same place as xbmc (i.e. at ff_pred8x8_128_dc_neon). Question. Most implementations of ffmpeg for ios I've found so far utilize static ffmpeg libs (like xbmc does). Is there a known problem with using ffmpeg as dylibs on ios? -- Regards, Lars. -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexcohn at netvision.net.il Sat Mar 30 19:02:47 2013 From: alexcohn at netvision.net.il (Alex Cohn) Date: Sat, 30 Mar 2013 21:02:47 +0300 Subject: [Libav-user] Reference app with ffmpeg n1.2 libs that works on IOS ? In-Reply-To: References: Message-ID: On 30 Mar 2013 20:35, "Lars Hammarstrand" wrote: >> If the crash is reproducible with FFmpeg, you don't need xbmc to test. > Sounds very good, but how? With the ffmpeg tools (ffplay, etc) as a stand alone package directly on ios? Exactly. This makes search for the problematic commit much easier, and also helps to fix it. > Question. Most implementations of ffmpeg for ios I've found so far utilize static ffmpeg libs (like xbmc does). Is there a known problem with using ffmpeg as dylibs on ios?
Static libs are much easier in building, using, and debugging. Shared libs are cool if the OS supports easy ways to reuse them, and upgrade independently from the application that uses them, thus reducing the maintenance efforts (e.g. when a security patch is made for ffmpeg libs). Unfortunately, iOS does not provide such mechanisms. That's why shared ffmpeg libs are rarely used on this platform. BR, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From lars.hammarstrand at gmail.com Sat Mar 30 19:49:52 2013 From: lars.hammarstrand at gmail.com (Lars Hammarstrand) Date: Sat, 30 Mar 2013 19:49:52 +0100 Subject: [Libav-user] Reference app with ffmpeg n1.2 libs that works on IOS ? In-Reply-To: References: Message-ID: 2013/3/30 Alex Cohn > On 30 Mar 2013 20:35, "Lars Hammarstrand" > wrote: > >> If the crash is reproducible with FFmpeg, you don't need xbmc to test. > > Sounds very good, but how? With the ffmpeg tools (ffplay, etc) as a > stand alone package directly on ios? > Exactly. This makes search for the problematic commit much easier, and > also helps to fix it. Cool - didn't know that ffplay was able to run on ios "bare-metal" !! Ssh to the device and start ffplay, just that simple? Must test this at once... :) > Question. Most implementations of ffmpeg for ios I've found so far > utilizes static ffmpeg libs (like xbcm do). Is there a known problem to use > ffmpeg as dylibs on ios? > > Static libs are much easier in building, using, and debugging. Shared libs > are cool if the OS supports easy ways to reuse them, and upgrade > independently from the application that uses them, thus reducing the > maintenance efforts (e.g. when a security patch is made for ffmpeg libs). > > Unfortunately, iOS does not provide such mechanisms. That's why shared > ffmpeg libs are rarely used on this platform. > Ok, thanks. I'll guess that the apple devs are performing some non disclosure magic to make their libs to function as dylibs... 
-- BR Lars. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lars.hammarstrand at gmail.com Sat Mar 30 20:08:18 2013 From: lars.hammarstrand at gmail.com (Lars Hammarstrand) Date: Sat, 30 Mar 2013 20:08:18 +0100 Subject: [Libav-user] Reference app with ffmpeg n1.2 libs that works on IOS ? In-Reply-To: References: Message-ID: 2013/3/30 Lars Hammarstrand > 2013/3/30 Alex Cohn > >> On 30 Mar 2013 20:35, "Lars Hammarstrand" >> wrote: >> >> If the crash is reproducible with FFmpeg, you don't need xbmc to test. >> > Sounds very good, but how? With the ffmpeg tools (ffplay, etc) as a >> stand alone package directly on ios? >> Exactly. This makes search for the problematic commit much easier, and >> also helps to fix it. > > > Cool - didn't know that ffplay was able to run on ios "bare-metal" !! Ssh > to the device and start ffplay, just that simple? Must test this at > once... :) > I'm gonna rebuild ffmpeg n1.2 to enable ffplay for ios 5.0 . What do think about the following config flags (copied from the kxmovie build): --disable-ffmpeg --disable-ffserver --disable-ffprobe --disable-doc --disable-bzlib --target-os=darwin --enable-cross-compile --enable-gpl --enable-version3 --arch=arm --cpu=cortex-a8 --enable-pic --extra-cflags='-arch armv7' --extra-ldflags='-arch armv7' --extra-cflags='-mfpu=neon -mfloat-abi=softfp -mvectorize-with-neon-quad' --enable-neon --enable-optimizations --enable-debug=3 --disable-stripping --disable-armv5te --disable-armv6 --disable-armv6t2 --enable-small --cc='/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc' --as='gas-preprocessor.pl/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc' --sysroot='/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS6.1.sdk' 
--extra-ldflags='-L/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS6.1.sdk/usr/lib/system' -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexcohn at netvision.net.il Sat Mar 30 20:21:15 2013 From: alexcohn at netvision.net.il (Alex Cohn) Date: Sat, 30 Mar 2013 22:21:15 +0300 Subject: [Libav-user] Reference app with ffmpeg n1.2 libs that works on IOS ? In-Reply-To: References: Message-ID: On Sat, Mar 30, 2013 at 9:49 PM, Lars Hammarstrand wrote: > Cool - didn't know that ffplay was able to run on ios "bare-metal" !! Ssh to > the device and start ffplay, just that simple? Must test this at once... :) You could try this link for example: http://care2achieve.wordpress.com/tag/ffmpeg-for-ios/ > Ok, thanks. I'll guess that the apple devs are performing some non > disclosure magic to make their libs to function as dylibs... They simply can sign them with Apple private key. From lars.hammarstrand at gmail.com Sat Mar 30 20:55:55 2013 From: lars.hammarstrand at gmail.com (Lars Hammarstrand) Date: Sat, 30 Mar 2013 20:55:55 +0100 Subject: [Libav-user] Reference app with ffmpeg n1.2 libs that works on IOS ? In-Reply-To: References: Message-ID: 2013/3/30 Alex Cohn > On Sat, Mar 30, 2013 at 9:49 PM, Lars Hammarstrand > wrote: > > Cool - didn't know that ffplay was able to run on ios "bare-metal" !! > Ssh to > > the device and start ffplay, just that simple? Must test this at > once... :) > Bad news, it seems that ffplay is not supported on ios, just ffprobe ffmpeg and ffserver ;-( If you find a pointer that says otherwise please mail me. I'll guess kxmovie is up for a run... You could try this link for example: > http://care2achieve.wordpress.com/tag/ffmpeg-for-ios Thanks although I've seen that before and unfortunately they don't utilize settings for hw opts for iDevices like xbmc and kxmovie. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lars.hammarstrand at gmail.com Sat Mar 30 21:04:55 2013 From: lars.hammarstrand at gmail.com (Lars Hammarstrand) Date: Sat, 30 Mar 2013 21:04:55 +0100 Subject: [Libav-user] Source code debugging libav using Xcode In-Reply-To: References: <83954040-dfad-4402-90ed-66c1423df68a@email.android.com> Message-ID: Make sure you compile the libs for the same target architecture (32 or 64) as your xcode proj. If you use the lldb debugger (standard setting in xcode) and move the libs to another location before you link, lldb won't find the source code. If you switch to gdb instead you will be able to use the "directory" command in the debug console to specify source code locations. 2013/3/29 Gary Overall > Thank you for your help. I feel like I am very close. I did as you said, > and it did actually build the libav libraries as I compiled and ran my > project within Xcode. I had to put the location of the libraries in my > header search path to be able to include the .h files. However, when I > try to call anything in the libraries (i.e. av_register_all()) I get the > following linker error. I just assumed that since I put the ffmpeg source > tree as a target dependency, that the linker would find the library files. > Am I still missing a critical step? > > "_av_register_all", referenced from: > > -[FFAppDelegate applicationDidFinishLaunching:] in FFAppDelegate.o > > ld: symbol(s) not found for architecture x86_64 > > clang: error: linker command failed with exit code 1 (use -v to see > invocation) > > > It found the .h files, but it did not seem to find the libraries. > > > Thanks, > > Gary O > > > From: rjvbertin at gmail.com > > Date: Thu, 28 Mar 2013 09:29:52 +0100 > > To: libav-user at ffmpeg.org > > Subject: Re: [Libav-user] Source code debugging libav using Xcode > > > > > In xcode 3, I simply add a new external build target to which I add the > ffmpeg source tree. Add that target as a dependency to your own target, and > you should be set.
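[A note on the "_av_register_all" linker error quoted above: finding the headers only satisfies the compiler; the linker also needs the library search paths and the -l flags. A sketch of the Xcode build settings that typically fix it — all paths here are illustrative assumptions, point them at wherever your ffmpeg target actually leaves its .a files:

```
HEADER_SEARCH_PATHS = $(SRCROOT)/ffmpeg
LIBRARY_SEARCH_PATHS = $(SRCROOT)/ffmpeg/libavformat $(SRCROOT)/ffmpeg/libavcodec $(SRCROOT)/ffmpeg/libavutil $(SRCROOT)/ffmpeg/libswscale
OTHER_LDFLAGS = -lavformat -lavcodec -lavutil -lswscale -lz -lbz2
```

Adding the source tree as a target dependency only orders the builds; it does not put the resulting static libraries on the link line.]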
If you want to be extra sure, build ffmpeg with the same > compiler you use in your project. > > BTW, I have a git project up over on github.com/RJVB that shows how to > make a monolithic framework out of the FFmpeg libs. > > _______________________________________________ > > Libav-user mailing list > > Libav-user at ffmpeg.org > > http://ffmpeg.org/mailman/listinfo/libav-user > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexcohn at netvision.net.il Sat Mar 30 21:32:10 2013 From: alexcohn at netvision.net.il (Alex Cohn) Date: Sat, 30 Mar 2013 23:32:10 +0300 Subject: [Libav-user] Reference app with ffmpeg n1.2 libs that works on IOS ? In-Reply-To: References: Message-ID: On Sat, Mar 30, 2013 at 10:55 PM, Lars Hammarstrand wrote: > Bad news, it seems that ffplay is not supported on ios, just ffprobe ffmpeg and ffserver I would say that ffmpeg that takes your h264 input file and converts to image2 should be just fine, as long as the dangerous code in ff_pred8x8_128_dc_neon: (libavcodec/arm/h264pred_neon.S) 0x5bc84: cdpeq p15, #5, c15, c0, c0, #4 is still there. BTW, you have probably seen the question http://stackoverflow.com/questions/4714903/what-can-cause-exc-bad-instruction-in-dyldbootstrap - TL;DR = use latest xcode, and you will not see EXC_BAD_INSTRUCTION anymore. Alex From alexcohn at netvision.net.il Sat Mar 30 22:00:39 2013 From: alexcohn at netvision.net.il (Alex Cohn) Date: Sun, 31 Mar 2013 00:00:39 +0300 Subject: [Libav-user] Best way to create a fractional ms timer? In-Reply-To: References: Message-ID: On Sat, Mar 30, 2013 at 4:22 PM, Mike Versteeg wrote: > Are there any functions in libav* available that can help me create a > fractional ms timer, say 33.33 ms?
Windows does not offer this, and one can > only achieve this by adding code to the timer event, code that can and will > be interrupted by the scheduler. I prefer something more robust. You can use select(http://msdn.microsoft.com/en-us/library/windows/desktop/ms740141(v=vs.85).aspx ) with dummy FDs and timeout at microsec precision. BR, Alex Cohn From alexcohn at netvision.net.il Sat Mar 30 22:15:56 2013 From: alexcohn at netvision.net.il (Alex Cohn) Date: Sun, 31 Mar 2013 00:15:56 +0300 Subject: [Libav-user] How can I free data buffers missed by avcodec_free_frame? In-Reply-To: References: Message-ID: On Fri, Mar 29, 2013 at 9:27 PM, Czarnek, Matt wrote: > In the description for avcodec_free_frame, it states: > > "Warning: this function does NOT free the data buffers themselves" > > I have allocated my buffers as such: > int curAVFramesize = avpicture_get_size(PIX_FMT_YUV420P, ccontext->width, > ccontext->height); > uint8_t* curAVFramePicBuffer = (uint8_t*)(av_malloc(curAVFramesize)); > AVFrame *curAVFrame=avcodec_alloc_frame(); > avpicture_fill((AVPicture *)curAVFrame,curAVFramePicBuffer, > PIX_FMT_YUV420P,ccontext->width, ccontext->height); > > > I figured that the warning meant calling 'avpicture_free' was necessary. So > I've been freeing it as: > avpicture_free((AVPicture *)curAVFrame); > avcodec_free_frame((AVFrame **)(&curAVFrame)); See http://ffmpeg.org/doxygen/trunk/group__lavc__picture.html: void avpicture_free (AVPicture *picture): Free a picture previously allocated by avpicture_alloc(). So, you can either switch to use avpicture_alloc(), or get the pic buffer from curAVFrame and use av_free() to free it. > Usually my program doesn't complain but every once in a while, after calling > 'avpicture_free' but before 'avcodec_free_frame' it'll throw a heap > allocation error. > > Here is the entire function: http://pastebin.com/jHecUySU > > Is avpicture_free needed? Any thoughts as to what might be happening?
Sometimes, the result of your allocations is not similar to what libavcodec does in avpicture_alloc(). BR, Alex Cohn From cehoyos at ag.or.at Sat Mar 30 22:53:31 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Sat, 30 Mar 2013 21:53:31 +0000 (UTC) Subject: [Libav-user] Reference app with ffmpeg n1.2 libs that works on IOS ? References: Message-ID: Lars Hammarstrand writes: >> >> If the crash is reproducible with FFmpeg, you don't need xbmc to test. >> > Sounds very good, but how? ?With the ffmpeg tools (ffplay, etc) as a >> > stand alone package directly on ios? >> >> Exactly. This makes search for the problematic commit much easier, >> and also helps to fix it. > > Cool - didn't know that ffplay was able to run on ios "bare-metal" Never use ffplay for (real) tests, it depends on an external library that is known to contain bugs (as FFmpeg). Always use ffmpeg (the application) for such tests. Please try to fix your quoting / set your mailer to text-only. Carl Eugen From brado at bighillsoftware.com Sun Mar 31 08:12:42 2013 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Sat, 30 Mar 2013 23:12:42 -0700 Subject: [Libav-user] Video and audio timing / syncing In-Reply-To: <1CD37ACA-6B93-4E3F-9997-89FE9C8B33CE@gmail.com> References: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> <20130328042544.GG3758@leki> <047B80CE-E744-47C0-AA9E-992B3F78BE92@bighillsoftware.com> <088A5762-926F-4011-9A2B-D9828A7F99F6@bighillsoftware.com> <7261CEEC-778A-4985-9C26-DA8F679B0986@universalx.net> <46153253-4986-4947-8CDD-4C2EA9664133@bighillsoftware.com> <1CD37ACA-6B93-4E3F-9997-89FE9C8B33CE@gmail.com> Message-ID: <4925EE80-0265-42A9-99E8-5DD355E81301@bighillsoftware.com> On Mar 30, 2013, at 4:16 AM, Ren? J.V. Bertin wrote: > Seriously, why not post the information provided by QTKit, and how you convert it? Seems it could be quite easy to confound QT's time scale and FFmpeg's time_base units? 
It appears I've discovered what the problem is, however I'm not yet clear on how to fix it. The problem is not in my pts or dts values, or my formulas I'm using to convert QTSampleBuffer presentationTime and decodeTime to time_base units. (In fact, as an aside, I commented all that code out and used the muxing.c pts-setting code and it changed absolutely nothing -- the same problem existed.) I am configuring QTKit to have a minimumFrameRate of 24, which is the value I'm using for time_base.den, according to the documentation. What I discovered is that despite configuring this frame rate in QTKit, that's not the actual frame rate being received -- at runtime capture is actually producing closer to 15 fps. I determined this by simply reading the log of pts values to the point where the value was the highest pts <= time_base.den -- and there were about 15 frames that had been consistently processed. So I then just manually hardcoded the time_base.den to 15, and boom, both video and audio are right on the money, completely in sync. The problem is that I don't want (or more properly put, I do not think it would be prudent or bug-free code) to hard-code this value, as I expect frame rate likely will in reality vary, based on computer, camera, etc. At the present, I've got a question out to the QuickTime API users mailing list because there does not appear to be a way to query the actual frame rate being captured from either the sample buffer received, the capture device, or the capture connection. But this raises the question: what is the proper way to deal with a varying frame rate during encoding, so as to properly set pts and dts? It would appear that the intention is for a codec context's time_base to be set once prior to the encoding cycle. So I'd guess that even if I could get a runtime frame rate as capture was taking place, I couldn't change the time_base.den value on the fly during encoding. How would you suggest one deals with this?
What happens if you set the time_base.den to the expected frame rate, such as 24, only to actually receive 15 (or some other number) frames per second? How do you deliver proper timings in this scenario? Thanks, Brad From alexcohn at netvision.net.il Sun Mar 31 09:03:40 2013 From: alexcohn at netvision.net.il (Alex Cohn) Date: Sun, 31 Mar 2013 10:03:40 +0300 Subject: [Libav-user] Video and audio timing / syncing In-Reply-To: <4925EE80-0265-42A9-99E8-5DD355E81301@bighillsoftware.com> References: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> <20130328042544.GG3758@leki> <047B80CE-E744-47C0-AA9E-992B3F78BE92@bighillsoftware.com> <088A5762-926F-4011-9A2B-D9828A7F99F6@bighillsoftware.com> <7261CEEC-778A-4985-9C26-DA8F679B0986@universalx.net> <46153253-4986-4947-8CDD-4C2EA9664133@bighillsoftware.com> <1CD37ACA-6B93-4E3F-9997-89FE9C8B33CE@gmail.com> <4925EE80-0265-42A9-99E8-5DD355E81301@bighillsoftware.com> Message-ID: On Sun, Mar 31, 2013 at 9:12 AM, Brad O'Hearne wrote: > On Mar 30, 2013, at 4:16 AM, Ren? J.V. Bertin wrote: >> Seriously, why not post the information provided by QTKit, and how you convert it? Seems it could be quite easy to confound QT's time scale and FFmpeg's time_base units? > > It appears I've discovered what the problem is, however I'm not yet clear on how to fix it. The problem is not in my pts or dts values, or my formulas I'm using to convert QTSampleBuffer presentationTime and decodeTime to time_base units. (In fact, as an aside, I commented all that code out and used the muxing.c pts-setting code and it changed absolutely nothing -- the same problem existed. > > I am configuring QTKit to have a minimumFrameRate of 24, which is the value I'm using for time_base.den, according to the documentation. What I discovered is that despite configuring this frame rate in QTKit, that's not the actual frame rate being received -- at runtime capture is actually producing closer to 15 fps. 
I determined this by simply reading the log of pts values to the point where the value was the highest pts <= time_base.den -- and there were about 15 frames that had been consistently processed. So I then just manually hardcoded the time_base.den to 15, and boom, both video and audio are right on the money, completely in sync. > > The problem is that I don't want (or more properly put, I do not think it would be prudent or bug-free code) to hard-code this value, as I expect frame rate likely will in reality vary, based on computer, camera, etc. At the present, I've got a question out to the QuickTime API users mailing list because there does not appear to be a way to query the actual frame rate being captured from either the sample buffer received, the capture device, or the capture connection. > > But this raises the question: what is the proper way to deal with a varying frame rate during encoding, so as to properly set pts and dts? It would appear that the intention is for a codec context's time_base to be set once prior to the encoding cycle. So I'd guess that even if I could get a runtime frame rate as capture was taking place, I couldn't change the time_base.den value on the fly during encoding. > > How would you suggest one deals with this? What happens if you set the time_base.den to the expected frame rate, such as 24, only to actually receive 15 (or some other number) frames per second? How do you deliver proper timings in this scenario? > > Thanks, > > Brad The trick is that not all containers and codecs support variable frame rate. For example, mp4 with h264 codec allows you to set time_base to a very high value, and set pts for video frames at irregular intervals in terms of this time base. (The latter is a rational number, so the actual expected time is pts*tb.num/tb.den). On the other hand, h263 only supports fixed frame rates, and these from a limited choice of predefined values.
And let me repeat again, the timestamps may be set on the container level (avformat) and on the codec level (avcodec), and sometimes the rules of the two levels differ, and the numerical values must be different. And remember, the players may handle some videos differently, when either the spec or the file allow different interpretations. BR, and have a nice holiday, Alex From brado at bighillsoftware.com Sun Mar 31 09:48:15 2013 From: brado at bighillsoftware.com (Brad O'Hearne) Date: Sun, 31 Mar 2013 00:48:15 -0700 Subject: [Libav-user] Video and audio timing / syncing In-Reply-To: References: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> <20130328042544.GG3758@leki> <047B80CE-E744-47C0-AA9E-992B3F78BE92@bighillsoftware.com> <088A5762-926F-4011-9A2B-D9828A7F99F6@bighillsoftware.com> <7261CEEC-778A-4985-9C26-DA8F679B0986@universalx.net> <46153253-4986-4947-8CDD-4C2EA9664133@bighillsoftware.com> <1CD37ACA-6B93-4E3F-9997-89FE9C8B33CE@gmail.com> <4925EE80-0265-42A9-99E8-5DD355E81301@bighillsoftware.com> Message-ID: On Mar 31, 2013, at 12:03 AM, Alex Cohn wrote: > The trick is that not all containers and codecs support variable frame > rate. Thanks for the reply, Alex. I'm somewhat torn with labeling this a case of a "variable frame rate" -- while being able to handle a variable frame rate would certainly solve the problem, the truth is that the video frames I'm receiving don't appear to actually be variable. The frame rate actually appears to be consistent the entire video (albeit not the frame rate I was initially expecting). To be completely specific about the nature of the problem, it appears the problem is that the frame rate isn't actually known until frames are received, but at the same time the time_base.den which is based on the frame rate needs to be configured prior to receiving captured frames. This has me wondering about the nature of the AVPacket duration property. 
This is another value whose use is rather vague and inconsistent across example code and documentation. On one hand, the documentation says: "Duration of this packet in AVStream->time_base units, 0 if unknown. Equals next_pts - this_pts in presentation order." Virtually all examples I've read don't set the duration at all, in fact it isn't even referenced. So I'm not exactly sure how duration is used if it can be set and left at zero without a problem, and to what degree there is a relationship between it and pts. Given that QT capture delivers duration time with each sample buffer, then perhaps this can be used to manage the frame rate difference. For example, suppose that the codec context's time_base.den value is set to 30 (for a 30fps). However let's say capture produces and delivers only 15fps. That's essentially the same scenario I'm dealing with (though I'm looking for 24fps, not 30, but 30 makes a good round number for this example). Anyway, suppose that I receive only 15fps from capture, or to restate, half the number of frames as expected by time_base.den, what happens if the duration is set to 2 * 1/frame rate -- meaning that each frame had two frames worth of duration at the time_base frame rate. Will setting a longer packet duration compensate for receiving fewer frames? 
Thanks, Brad From alexcohn at netvision.net.il Sun Mar 31 10:25:55 2013 From: alexcohn at netvision.net.il (Alex Cohn) Date: Sun, 31 Mar 2013 11:25:55 +0300 Subject: [Libav-user] Video and audio timing / syncing In-Reply-To: References: <8849BACD-1DB9-4C98-A280-5AC348A7B289@bighillsoftware.com> <20130328042544.GG3758@leki> <047B80CE-E744-47C0-AA9E-992B3F78BE92@bighillsoftware.com> <088A5762-926F-4011-9A2B-D9828A7F99F6@bighillsoftware.com> <7261CEEC-778A-4985-9C26-DA8F679B0986@universalx.net> <46153253-4986-4947-8CDD-4C2EA9664133@bighillsoftware.com> <1CD37ACA-6B93-4E3F-9997-89FE9C8B33CE@gmail.com> <4925EE80-0265-42A9-99E8-5DD355E81301@bighillsoftware.com> Message-ID: On Sun, Mar 31, 2013 at 10:48 AM, Brad O'Hearne wrote: > On Mar 31, 2013, at 12:03 AM, Alex Cohn wrote: >> The trick is that not all containers and codecs support variable frame rate. > > Thanks for the reply, Alex. I'm somewhat torn with labeling this a case of a "variable frame rate" -- while being able to handle a variable frame rate would certainly solve the problem, the truth is that the video frames I'm receiving don't appear to actually be variable. The frame rate actually appears to be consistent the entire video (albeit not the frame rate I was initially expecting). To be completely specific about the nature of the problem, it appears the problem is that the frame rate isn't actually known until frames are received, but at the same time the time_base.den which is based on the frame rate needs to be configured prior to receiving captured frames. If your use case allows to buffer enough frames to determine the "actual, fixed but not as requested" frame rate, you can try to set time_base accordingly and create the container accordingly. Otherwise, it is equivalent to variable frame rate in my eyes. Unfortunately, not all containers (avformat) and streams (avcodec) support arbitrary frame rates. Most of these support variable frame rate (i.e. pts may increase not by 1). 
> This has me wondering about the nature of the AVPacket duration property. This is another value whose use is rather vague and inconsistent across example code and documentation. On one hand, the documentation says: > > "Duration of this packet in AVStream->time_base units, 0 if unknown. Equals next_pts - this_pts in presentation order." > > Virtually all examples I've read don't set the duration at all, in fact it isn't even referenced. So I'm not exactly sure how duration is used if it can be set and left at zero without a problem, and to what degree there is a relationship between it and pts. Given that QT capture delivers duration time with each sample buffer, then perhaps this can be used to manage the frame rate difference. For example, suppose that the codec context's time_base.den value is set to 30 (for a 30fps). However let's say capture produces and delivers only 15fps. That's essentially the same scenario I'm dealing with (though I'm looking for 24fps, not 30, but 30 makes a good round number for this example). Anyway, suppose that I receive only 15fps from capture, or to restate, half the number of frames as expected by time_base.den, what happens if the duration is set to 2 * 1/frame rate -- meaning that each frame had two frames worth of duration at the time_base frame rate. > > Will setting a longer packet duration compensate for receiving fewer frames? I am not sure when "duration" is taken into account, but you could simply set current->pts = prev->pts+2. Note that this was my original proposal. BR, Alex From mike at mikeversteeg.com Sun Mar 31 13:14:20 2013 From: mike at mikeversteeg.com (Mike Versteeg) Date: Sun, 31 Mar 2013 13:14:20 +0200 Subject: [Libav-user] Best way to create a fractional ms timer? In-Reply-To: References: Message-ID: Thanks Alex. This is much like I do now (using Sleep()), but as I mentioned the problem with these one shot timer solutions is the event is easily interrupted giving me +/- 10 ms inaccuracy at times. 
I ran tests and a periodic timer gives off a much more stable clock. Unless Sleep() itself is more susceptible to interrupts than Select()? So libav* does not offer a periodic timer (with OS interdependency as a bonus)? Certainly explains why I could not find any, which kind of surprised me.. -------------- next part -------------- An HTML attachment was scrubbed... URL: From A.Rukhlov at gmail.com Sun Mar 31 13:42:58 2013 From: A.Rukhlov at gmail.com (=?KOI8-R?B?4czFy9PBzsTSIPLVyMzP1w==?=) Date: Sun, 31 Mar 2013 15:42:58 +0400 Subject: [Libav-user] Best way to create a fractional ms timer? In-Reply-To: References: Message-ID: timeBeginPeriod() + timeGetTime() +...+ Sleep() + timeEndPeriod() http://msdn.microsoft.com/ru-RU/library/windows/desktop/dd757624%28v=vs.85%29.aspx 2013/3/31 Mike Versteeg > Thanks Alex. This is much like I do now (using Sleep()), but as I > mentioned the problem with these one shot timer solutions is the event is > easily interrupted giving me +/- 10 ms inaccuracy at times. I ran tests and > a periodic timer gives off a much more stable clock. Unless Sleep() itself > is more susceptible to interrupts than Select()? > > So libav* does not offer a periodic timer (with OS interdependency as a > bonus)? Certainly explains why I could not find any, which kind of > surprised me.. > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike at mikeversteeg.com Sun Mar 31 16:28:44 2013 From: mike at mikeversteeg.com (mikeversteeg) Date: Sun, 31 Mar 2013 07:28:44 -0700 (PDT) Subject: [Libav-user] Best way to create a fractional ms timer? In-Reply-To: References: Message-ID: <1364740124777-4657120.post@n4.nabble.com> > timeBeginPeriod() + timeGetTime() +...+ Sleep() + timeEndPeriod() Thanks, but I do not think you read my previous posts. 
-- View this message in context: http://libav-users.943685.n4.nabble.com/Libav-user-Best-way-to-create-a-fractional-ms-timer-tp4657096p4657120.html Sent from the libav-users mailing list archive at Nabble.com. From goverall at hotmail.com Sun Mar 31 17:23:59 2013 From: goverall at hotmail.com (Gary Overall) Date: Sun, 31 Mar 2013 11:23:59 -0400 Subject: [Libav-user] Question:Reading ID3 info from a private stream Message-ID: I have a TS file that presently includes 3 streams:1) h264 Video 2) aac audio 3) See Below The third stream contains some data that is being presented in ID3 format. The stream has a stream-ID of 0xBD (stream type private??). There are several packets (e.g. 5) of this type spread throughout the file, each with it own PTS. The libav functions identify the stream as AVMEDIA_TYPE_AUDIO and AV_CODEC_ID_MP3 but cannot find a codec for this stream. Even though I know there are several (e.g. 5) of these packets spread through the file, each with its own data and PTS, av_read_frame (and ffprobe) gives me only one packet back. The payload data for that single packet contains a concatenation of the data from all (5) packets from this stream. The PTS for this single packet is the PTS for the first of these packets. For my purposes, it does not matter that this stream is mis-identified as an mp3 stream because the data is still available in the packet, but I am trying to find a way to get these back as individual packets, each with its own PTS. Does anyone have intuition regarding why these packets are being consolidated into one? I really don't know the details of why these data packets are in ID3 format, but I do know it is important for me to have the PTS values of each of these packets individually. Thank you in advance for any help. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From john.orr at scala.com Sun Mar 31 20:31:09 2013 From: john.orr at scala.com (John Orr) Date: Sun, 31 Mar 2013 14:31:09 -0400 Subject: [Libav-user] Best way to create a fractional ms timer? In-Reply-To: References: Message-ID: <515880ED.8060308@scala.com> On 3/31/2013 7:42 AM, Alexander Rukhlov wrote: > timeBeginPeriod() + timeGetTime() +...+ Sleep() + timeEndPeriod() > http://msdn.microsoft.com/ru-RU/library/windows/desktop/dd757624%28v=vs.85%29.aspx > Note that you should not set timeBeginPeriod frequently, one per process is best if you can manage it. In the past I've seen PCs+BIOS lose a little bit of time off of their real time clock when calling timeBeginPeriod() so they get way out of sync if you call it frequently. Here are some still relevant links to timing issues in Windows: http://blogs.msdn.com/b/mediasdkstuff/archive/2009/07/02/why-are-the-multimedia-timer-apis-timesetevent-not-as-accurate-as-i-would-expect.aspx http://www.gamedev.net/page/resources/_/technical/game-programming/timing-pitfalls-and-solutions-r2086 On top of the timing function calls, you may need to raise the priority of the thread that calls Sleep() to something time critical. It was pretty much a necessity with Windows XP and earlier. If you didn't, there is no guarantee that the thread that called Sleep() would regain CPU immediately after the sleep period was finished. Once a non-time-critical thread yields the CPU, the OS has no special reason to reschedule it ahead of any other thread. When the Sleep period finished, the sleeping thread has to get back in line with all the other threads. The rules are different for time critical threads, they can get the CPU back ahead of the rest of the pack. The Windows thread scheduler saw some changes for Vista and later, I'm not as familiar with the behavior there, so I don't know if Microsoft improved that situation. --Johno