[FFmpeg-trac] #7768(undetermined:new): ffmpeg does not handle HTTP read errors (e.g. from cloud storage)

FFmpeg trac at avcodec.org
Mon Mar 4 22:23:34 EET 2019

#7768: ffmpeg does not handle HTTP read errors (e.g. from cloud storage)
             Reporter:  dprestegard   |                     Type:  enhancement
               Status:  new           |                 Priority:  normal
            Component:  undetermined  |                  Version:  unspecified
             Keywords:  http          |               Blocked By:
             Blocking:                |  Reproduced by developer:  0
Analyzed by developer:  0             |
 Summary of the bug:
 When reading large source files via HTTPS (e.g. a signed URL on AWS S3 or
 similar object storage), ffmpeg does not handle HTTP errors gracefully.
 Transient errors are expected when using services like S3, and clients
 are expected to handle them with a retry mechanism.

 ffmpeg appears to interpret a read error as the end of the input file:
 when an error occurs, it simply stops encoding and finalizes the output,
 so the output file is truncated.

 Ideally ffmpeg would retry when it hits transient errors. That would
 enable reliable processing of large files in cloud storage without
 resorting to a "split and stitch" or "chunked encoding" workflow.
 Although those approaches are feasible and widely used, they add
 complexity and can affect quality.
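 The requested behavior can be sketched as a retry layer around ranged
 HTTP reads. The following is a minimal illustration, not ffmpeg code:
 `fetch_range` is a hypothetical callable standing in for an HTTP GET
 with a Range header, and the set of status codes treated as transient
 is an assumption typical for S3-style services.

```python
import time

# Status codes commonly returned by S3-like services for transient
# failures (an assumption for this sketch, not an ffmpeg constant).
TRANSIENT_STATUSES = {429, 500, 502, 503, 504}

def read_with_retries(fetch_range, offset, length, max_retries=5, base_delay=0.5):
    """Read `length` bytes starting at `offset`, retrying transient errors.

    `fetch_range(offset, length)` is a hypothetical callable returning
    (http_status, bytes); it stands in for a ranged HTTP GET.
    """
    data = b""
    attempt = 0
    while len(data) < length:
        status, chunk = fetch_range(offset + len(data), length - len(data))
        if status in TRANSIENT_STATUSES:
            attempt += 1
            if attempt > max_retries:
                raise IOError("giving up after %d retries (HTTP %d)" % (max_retries, status))
            time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff
            continue
        if status not in (200, 206):
            raise IOError("non-retryable HTTP error %d" % status)
        attempt = 0  # reset the retry budget after any successful read
        data += chunk
    return data
```

 With a scheme like this, a transient 503 midway through a large object
 resumes from the last byte received instead of being treated as
 end-of-file.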

 How to reproduce:
 Perform any transcode of a large (100+ GB) file via an S3 signed URL. It
 will most likely produce a truncated output.

 Patches should be submitted to the ffmpeg-devel mailing list and not this
 bug tracker.

Ticket URL: <https://trac.ffmpeg.org/ticket/7768>
FFmpeg <https://ffmpeg.org>
FFmpeg issue tracker
