[Libav-user] Libav - Is it really difficult

ludi liu_de_yang at qq.com
Tue Jul 14 06:34:47 CEST 2015


Hi Tuukka,
Yes, I am not encoding on the fly.

The reason I am using FFmpeg is:

1. On the remote side there may be a simulator developed by me, or an IP phone from a standard vendor.

2. If the remote side is my simulator, there is no issue: I can just send the data over a socket, receive it, and store it.

3. If the remote side is a standard IP phone (one that supports Opus), then it should be able to receive the data and play it.

On paper the requirement looks simple, and it seems I should be able to manage it with FFmpeg.

But I am finding it difficult to accomplish, as I have very little knowledge of FFmpeg.

Anyhow, I have started writing test code and will come back with specific problems.

Best Regards


On Mon, Jul 13, 2015 at 4:53 PM, Tuukka Pasanen <pasanen.tuukka at gmail.com> wrote:
    Just wondering why you are using FFmpeg for this kind of stuff? Why not just send it to a socket, as the reader asked? If I understand correctly, you are not encoding this WebM on the fly.
    On 13.07.2015 13:28, Austin Einter wrote:

        I am trying to use the FFmpeg libav libraries, and have been experimenting a lot over the last month. I have not been able to get through. Is it really difficult to use FFmpeg?
        My requirement is simple, as below. Can you please guide me on whether FFmpeg is suitable, or whether I have to implement this on my own (using the available codec libraries)?
        1. I have a WebM file (containing VP8 and Opus frames).
        2. I will read the encoded data and send it to a remote guy.
        3. The remote guy will read the encoded data from the socket.
        4. The remote guy will write it to a file (can we avoid decoding?).
        5. Then the remote guy should be able to play the file using ffplay or any other player.
        Now I will take a specific example.
        1. Say I have a file small.webm, containing VP8 and Opus frames.
        2. I am reading only audio frames (Opus) using the av_read_frame API (then checking the stream index and keeping audio frames only).
        3. So now I have the encoded data buffer as packet.data and its size as packet.size (please correct me if wrong).
        4. Here is my first doubt: the audio packet size is not the same every time. Why the difference? Sometimes the packet size is as low as 54 bytes and sometimes it is 420 bytes. For Opus, will the frame size vary from time to time?
        5. Next, say I somehow extract a single frame from the packet (I really do not know how to extract a single frame) and send it to the remote guy.
        6. Now the remote guy needs to write the buffer to a file. To write the file we can use the av_interleaved_write_frame or av_write_frame API. Both of them take an AVPacket as argument. I can have an AVPacket and set its data and size members, then call the av_write_frame API. But that does not work. The reason may be that one should set other members of the packet, like pts, dts, etc. But I do not have such information to set.
        Can somebody help me to learn whether FFmpeg is the right choice, or whether I should write custom logic, like parsing an Opus file and getting it frame by frame?

_______________________________________________
Libav-user mailing list
Libav-user at ffmpeg.org
http://ffmpeg.org/mailman/listinfo/libav-user
