[FFmpeg-user] FFmpeg: how to output over HTTP

Glenn W wolfe.t.glenn at gmail.com
Wed Mar 20 20:31:03 EET 2019


> btw, did you set client_max_body_size  in nginx.conf ? something like
> 1000000M :-)

Yes, see:

 "Now, setting my nginx configuration to not check for the body size by
setting max to 0 in the ingress configuration template, I tried again: "

I set it to zero so as to forgo the body-size check altogether, as
documented here:
http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size
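
In my case this goes in through the ingress configuration template rather
than a hand-edited nginx.conf, but for reference, the equivalent settings
would look roughly like this (the annotation route assumes the
kubernetes/ingress-nginx controller, and the ingress name here is just a
placeholder):

```
# Directive the template ends up rendering (0 disables the body-size check):
#   client_max_body_size 0;

# Roughly equivalent per-ingress annotation (ingress name is hypothetical):
kubectl annotate ingress live-rtsp-in \
  nginx.ingress.kubernetes.io/proxy-body-size="0" --overwrite
```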

That is how I got it to stop throwing that error.
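
As a follow-up to my own side question further down about capping the size
of the chunked bodies: I don't see a chunk-size option documented for the
http protocol. The closest thing I can find is chunked_post, which only
toggles chunked Transfer-Encoding for the POST as a whole, so whether it
helps here is an open question. Something like:

```
# -chunked_post 0 turns off chunked Transfer-Encoding on the POST entirely
# (the default is 1, i.e. chunked); it is not a size cap.
ffmpeg -re -i hq-video.mp4 -c:v libx264 -an -f mpegts \
  -chunked_post 0 http://live.nycv.biz/push
```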

However, as described, I am still having problems ingesting the
packets.
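
For what it is worth, one way to double-check what is actually going over
the wire is to filter the capture with tshark along these lines
(capture.pcap here is just a placeholder for the trace file):

```
# list the reassembled HTTP requests with their method and target
tshark -r capture.pcap -Y "http.request" \
  -T fields -e frame.number -e http.request.method -e http.request.uri

# and any HTTP responses / status codes coming back
tshark -r capture.pcap -Y "http.response" \
  -T fields -e frame.number -e http.response.code

# if the traffic is on a port Wireshark does not treat as HTTP (e.g. 5558),
# force the dissector with: -d tcp.port==5558,http
```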



On Wed, Mar 20, 2019 at 1:32 PM andrei ka <andrei.k.gml at gmail.com> wrote:

> btw, did you set client_max_body_size  in nginx.conf ? something like
> 1000000M :-)
>
> On Wed, Mar 20, 2019 at 6:03 PM Glenn W <wolfe.t.glenn at gmail.com> wrote:
>
> > Thanks Moritz,
> >
> > > File share host. ;-)
> >
> > So true. Please see the link below for a fuller example log from my
> > local tests:
> >
> >
> >
> https://drive.google.com/file/d/1HMS64eBDpTRlFwVarz8G75IffHzgqvCV/view?usp=sharing
> >
> > > I would guess that this trace was started *after* the HTTP
> > > connection was established.
> >
> > This time I definitely started the trace before I even started either
> > ffmpeg command in my terminal, and I still see similar behavior (many TCP
> > connections before any HTTP header). You can still see how the POSTs are
> > scattered throughout the packets. I'm confused about this behavior.
> >
> > > I'm totally convinced that if you use ffmpeg the way you are doing, you
> > > use HTTP only.
> >
> > So are you thinking all those TCP [PSH, ACK] packets are contained
> > within a given HTTP POST?
> >
> > Looking at the first few packets, I see that FFmpeg first sends a [SYN],
> > the server responds with a [SYN, ACK], and the client replies with an
> > [ACK]. At this point, should it not initiate the HTTP exchange with the
> > request headers and endpoint? Instead it just starts sending TCP
> > [PSH, ACK] segments.
> >
> > Perhaps I am confused about how this should work. What ends up happening
> > when I point this at my HTTP load balancer is that the request is never
> > passed to the backend service properly; it is never able to establish a
> > connection.
> >
> > > Does it also expect particular ports? It will need to be configured to
> > > understand the same ports, right?
> >
> > I use an nginx HTTP ingress to load-balance HTTP requests to the
> > respective backend services listening on TCP ports on various pods. In
> > this case, my ingress would route HTTP <an-example-endpoint>/video to
> > port 5558 on whichever pod. That pod will be running the same listen
> > command.
> >
> > *UPDATE: *
> >
> > After digging into this more, I think I found what was going wrong!
> > Looking at my nginx-controller logs, I see that I am getting an error
> > because the `client intended to send too large chunked body`:
> >
> > ```
> > 2019/03/20 16:02:41 [warn] 324#324: *3468886 a client request body is
> > buffered to a temporary file /tmp/client-body/0000000009, client:
> > 10.52.0.1, server: live.nycv.biz, request: "POST /push HTTP/1.1", host:
> > "live.nycv.biz"
> > 2019/03/20 16:02:42 [error] 324#324: *3468886 client intended to send too
> > large chunked body: 1046500+32768 bytes, client: 10.52.0.1, server:
> > live.nycv.biz, request: "POST /push HTTP/1.1", host: "live.nycv.biz"
> > ```
> >
> > As before, when I was sending to the server, after about a minute of
> > sending the video I would see an error on the client side from FFmpeg:
> >
> > ```
> > av_interleaved_write_frame(): Broken pipeB time=00:00:35.10
> > bitrate=17993.7kbits/s speed=0.399x
> > Error writing trailer of http://live.nycv.biz/push: Broken pipe
> > ```
> >
> > Now, setting my nginx configuration to not check for the body size by
> > setting max to 0 in the ingress configuration template, I tried again:
> >
> > (Side note: Is there any way to tell FFmpeg to set a maximum on the
> > chunked bodies before it sends the HTTP POST?)
> >
> > *From Client (sending side) : *
> >
> > chill at life ~$ ffmpeg -re -i hq-video.mp4 -c:v libx264 -an -f mpegts
> > http://live.nycv.biz/push
> > ffmpeg version 4.1.1 Copyright (c) 2000-2019 the FFmpeg developers
> >   built with Apple LLVM version 10.0.0 (clang-1000.11.45.5)
> >   configuration: --prefix=/usr/local/Cellar/ffmpeg/4.1.1 --enable-shared
> > --enable-pthreads --enable-version3 --enable-hardcoded-tables
> > --enable-avresample --cc=clang
> >
> >
> --host-cflags='-I/Library/Java/JavaVirtualMachines/openjdk-11.0.2.jdk/Contents/Home/include
> >
> >
> -I/Library/Java/JavaVirtualMachines/openjdk-11.0.2.jdk/Contents/Home/include/darwin'
> > --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl
> > --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus
> > --enable-librubberband --enable-libsnappy --enable-libtesseract
> > --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264
> > --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig
> > --enable-libfreetype --enable-frei0r --enable-libass
> > --enable-libopencore-amrnb --enable-libopencore-amrwb
> --enable-libopenjpeg
> > --enable-librtmp --enable-libspeex --enable-videotoolbox
> --disable-libjack
> > --disable-indev=jack --enable-libaom --enable-libsoxr
> >   libavutil      56. 22.100 / 56. 22.100
> >   libavcodec     58. 35.100 / 58. 35.100
> >   libavformat    58. 20.100 / 58. 20.100
> >   libavdevice    58.  5.100 / 58.  5.100
> >   libavfilter     7. 40.101 /  7. 40.101
> >   libavresample   4.  0.  0 /  4.  0.  0
> >   libswscale      5.  3.100 /  5.  3.100
> >   libswresample   3.  3.100 /  3.  3.100
> >   libpostproc    55.  3.100 / 55.  3.100
> > Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'hq-video.mp4':
> >   Metadata:
> >     major_brand     : mp42
> >     minor_version   : 0
> >     compatible_brands: isommp42
> >     creation_time   : 2019-02-11T22:01:43.000000Z
> >     location        : +40.7298-073.9904/
> >     location-eng    : +40.7298-073.9904/
> >     com.android.version: 9
> >     com.android.capture.fps: 30.000000
> >   Duration: 00:01:42.85, start: 0.000000, bitrate: 42251 kb/s
> >     Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661),
> yuvj420p(pc,
> > bt470bg/bt470bg/smpte170m), 3840x2160, 42033 kb/s, SAR 1:1 DAR 16:9,
> 29.95
> > fps, 30 tbr, 90k tbn, 180k tbc (default)
> >     Metadata:
> >       creation_time   : 2019-02-11T22:01:43.000000Z
> >       handler_name    : VideoHandle
> >     Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz,
> > stereo, fltp, 156 kb/s (default)
> >     Metadata:
> >       creation_time   : 2019-02-11T22:01:43.000000Z
> >       handler_name    : SoundHandle
> > Stream mapping:
> >   Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
> > Press [q] to stop, [?] for help
> > [libx264 @ 0x7fe0e4803e00] using SAR=1/1
> > [libx264 @ 0x7fe0e4803e00] using cpu capabilities: MMX2 SSE2Fast SSSE3
> > SSE4.2 AVX FMA3 BMI2 AVX2
> > [libx264 @ 0x7fe0e4803e00] profile High, level 5.1
> > Output #0, mpegts, to 'http://live.nycv.biz/push':
> >   Metadata:
> >     major_brand     : mp42
> >     minor_version   : 0
> >     compatible_brands: isommp42
> >     com.android.capture.fps: 30.000000
> >     location        : +40.7298-073.9904/
> >     location-eng    : +40.7298-073.9904/
> >     com.android.version: 9
> >     encoder         : Lavf58.20.100
> >     Stream #0:0(eng): Video: h264 (libx264), yuvj420p(pc), 3840x2160 [SAR
> > 1:1 DAR 16:9], q=-1--1, 30 fps, 90k tbn, 30 tbc (default)
> >     Metadata:
> >       creation_time   : 2019-02-11T22:01:43.000000Z
> >       handler_name    : VideoHandle
> >       encoder         : Lavc58.35.100 libx264
> >     Side data:
> >       cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
> > frame= 3080 fps= 11 q=-1.0 Lsize=  254544kB time=00:01:42.76
> > bitrate=20290.8kbits/s speed=0.379x
> > video:235782kB audio:0kB subtitle:0kB other streams:0kB global
> headers:0kB
> > muxing overhead: 7.957043%
> > [libx264 @ 0x7fe0e4803e00] frame I:13    Avg QP:20.17  size:930179
> > [libx264 @ 0x7fe0e4803e00] frame P:992   Avg QP:23.19  size:154103
> > [libx264 @ 0x7fe0e4803e00] frame B:2075  Avg QP:27.27  size: 36857
> > [libx264 @ 0x7fe0e4803e00] consecutive B-frames:  9.0%  2.2%  3.7% 85.1%
> > [libx264 @ 0x7fe0e4803e00] mb I  I16..4:  0.9% 86.0% 13.1%
> > [libx264 @ 0x7fe0e4803e00] mb P  I16..4:  0.1%  4.5%  0.3%  P16..4: 48.4%
> > 9.9%  8.0%  0.0%  0.0%    skip:28.8%
> > [libx264 @ 0x7fe0e4803e00] mb B  I16..4:  0.0%  1.5%  0.1%  B16..8: 42.0%
> > 1.4%  0.4%  direct: 1.0%  skip:53.7%  L0:42.0% L1:56.0% BI: 2.1%
> > [libx264 @ 0x7fe0e4803e00] 8x8 transform intra:92.0% inter:60.7%
> > [libx264 @ 0x7fe0e4803e00] coded y,uvDC,uvAC intra: 91.2% 56.8% 13.3%
> > inter: 14.0% 8.8% 0.2%
> > [libx264 @ 0x7fe0e4803e00] i16 v,h,dc,p: 17% 29% 23% 30%
> > [libx264 @ 0x7fe0e4803e00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 13% 16% 26%  6%
> > 7%  8%  8%  7%  9%
> > [libx264 @ 0x7fe0e4803e00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 17% 22% 12%  7%
> > 10%  8%  9%  7%  8%
> > [libx264 @ 0x7fe0e4803e00] i8c dc,h,v,p: 59% 21% 17%  3%
> > [libx264 @ 0x7fe0e4803e00] Weighted P-Frames: Y:0.7% UV:0.0%
> > [libx264 @ 0x7fe0e4803e00] ref P L0: 69.0% 13.9% 14.2%  2.8%  0.0%
> > [libx264 @ 0x7fe0e4803e00] ref B L0: 93.7%  5.6%  0.8%
> > [libx264 @ 0x7fe0e4803e00] ref B L1: 94.1%  5.9%
> > [libx264 @ 0x7fe0e4803e00] kb/s:18777.01
> >
> > Looks like everything went through fine from the client's perspective:
> > *no more `broken pipe` error*.
> >
> > *However, *
> >
> > From Server Side (receiving):
> >
> > I am still not getting any data through:
> >
> > Inside my kubernetes pod:
> >
> > ```
> > / # . scripts/ffmpeg-listen.sh
> > current time @ 1553099678
> > ffmpeg version 4.1.1 Copyright (c) 2000-2019 the FFmpeg developers
> >   built with gcc 6.4.0 (Alpine 6.4.0)
> >   configuration: --disable-debug --disable-doc --disable-ffplay
> > --enable-ffmpeg --enable-protocol=rtp --enable-protocol=udp
> > --enable-protocol=file --enable-protocol=crypto --enable-protocol=data
> > --enable-encoder=mp4 --enable-encoder=rtp --enable-decoder=rtp
> > --enable-encoder='rawvideo,libx264' --enable-decoder=h264
> > --enable-encoder=h264 --enable-muxer=segment
> > --enable-muxer='stream_segment,ssegment' --enable-muxer='rawvideo,mp4'
> > --enable-muxer=rtsp --enable-muxer=h264 --enable-demuxer=rawvideo
> > --enable-demuxer=mov --enable-demuxer=h264 --enable-demuxer=rtsp
> > --enable-parser=h264 --enable-parser=mpeg4 --enable-avcodec
> > --enable-avformat --enable-avfilter --enable-gpl --enable-small
> > --enable-libx264 --enable-nonfree --enable-openssl
> >   libavutil      56. 22.100 / 56. 22.100
> >   libavcodec     58. 35.100 / 58. 35.100
> >   libavformat    58. 20.100 / 58. 20.100
> >   libavdevice    58.  5.100 / 58.  5.100
> >   libavfilter     7. 40.101 /  7. 40.101
> >   libswscale      5.  3.100 /  5.  3.100
> >   libswresample   3.  3.100 /  3.  3.100
> >   libpostproc    55.  3.100 / 55.  3.100
> > Splitting the commandline.
> > Reading option '-loglevel' ... matched as option 'loglevel' (set logging
> > level) with argument 'debug'.
> > Reading option '-listen' ... matched as AVOption 'listen' with argument
> > '1'.
> > Reading option '-i' ... matched as input url with argument '
> > http://0.0.0.0:5558'.
> > Reading option '-c:v' ... matched as option 'c' (codec name) with
> argument
> > 'h264'.
> > Reading option '-r' ... matched as option 'r' (set frame rate (Hz value,
> > fraction or abbreviation)) with argument '30'.
> > Reading option '-flags' ... matched as AVOption 'flags' with argument
> > '+cgop'.
> > Reading option '-g' ... matched as AVOption 'g' with argument '30'.
> > Reading option '-hls_segment_filename' ... matched as AVOption
> > 'hls_segment_filename' with argument '/fuse/tmp/file%03d.ts'.
> > Reading option '-hls_time' ... matched as AVOption 'hls_time' with
> argument
> > '1'.
> > Reading option '/fuse/tmp/out.m3u8' ... matched as output url.
> > Reading option '-y' ... matched as option 'y' (overwrite output files)
> with
> > argument '1'.
> > Finished splitting the commandline.
> > Parsing a group of options: global .
> > Applying option loglevel (set logging level) with argument debug.
> > Applying option y (overwrite output files) with argument 1.
> > Successfully parsed a group of options.
> > Parsing a group of options: input url http://0.0.0.0:5558.
> > Successfully parsed a group of options.
> > Opening an input file: http://0.0.0.0:5558.
> > [NULL @ 0x5610666ab240] Opening 'http://0.0.0.0:5558' for reading
> > [http @ 0x5610666abc00] Setting default whitelist
> > 'http,https,tls,rtp,tcp,udp,crypto,httpproxy'
> > http://0.0.0.0:5558: End of file
> > ```
> >
> > There is no indication that any data actually made it through to the pod;
> > however, I did notice that the EOF came at about the exact same time that
> > the client-side process finished, which is quite peculiar.
> >
> >
> > And finally, here are the logs from the nginx-controller:
> >
> > ```
> > 10.52.0.1 - [10.52.0.1] - - [20/Mar/2019:16:50:47 +0000] "POST /push
> > HTTP/1.1" 499 0 "-" "Lavf/58.20.100" 260727893 270.979
> > [default-live-rtsp-in-5558] 10.52.0.32:5558 0 0.001 -
> > 8a27399690c628bd357ccf7216bf4aa6
> > ```
> >
> > Simply a POST with no error... strangely, this would indicate that a
> > single chunked POST arrives only after all the packets have been sent,
> > which seems a bit crazy. I would much rather it break the stream up into
> > many chunked POSTs.
> >
> >
> > What is going wrong here is quite a mystery to me. I really appreciate
> > your help in getting to the bottom of this.
> >
> > Best,
> > Glenn W
> >
> > On Wed, Mar 20, 2019 at 8:20 AM Ted Park <kumowoon1025 at gmail.com> wrote:
> >
> > > > In my trace which I had analyzed as a proof of concept, HTTP was
> > > > perfectly recognized. I had used port 8888, while my Wireshark's HTTP
> > > > protocol seems to be configured to
> > > > "80,3128,3132,5985,8080,8088,11371,1900,2869,2710". So Wireshark
> should
> > > > be smart enough...
> > >
> > > You must be right, the only thing I can think of is if the capture was
> > > started after ffmpeg was run and only the chunked content was captured
> > > that can’t be decoded without the headers.
> > >
> > > >> The more important question is if there is an issue: Segmentation
> > > >> Fault 11
> > > >
> > > This is not an issue. I can segment the mpegts chunks just fine. This
> > > error comes from the way I am running my server side with `-f null -`
> > > to discard the output (not how I would normally run it), purely for
> > > the purpose of testing the network transport.
> > >
> > > You misunderstand, the message isn’t regarding segmenting the stream, it
> > > might indicate an issue within ffmpeg itself
> _______________________________________________
> ffmpeg-user mailing list
> ffmpeg-user at ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-user
>
> To unsubscribe, visit link above, or email
> ffmpeg-user-request at ffmpeg.org with subject "unsubscribe".

