Hi,

I am streaming a webcam over UDP as raw H.264 (on Windows). When I run:

ffmpeg -f dshow -r 25 -s 160x120 -i video="A4 tech USB2.0 Camera" udp://<ipaddr>:8888/a.h264

and change -s from 160x120 to 640x480 (both resolutions the camera supports), I see a significant change in the amount of data transmitted over the network per minute, which I have monitored.
But when I try to do the same from my own code, either by:

i) opening the capture at its native resolution (640x480 by default), scaling each frame down to the required resolution, and streaming the scaled frames, or
ii) opening the capture at a different resolution via an AVDictionary, like:

av_dict_set(&dict, "video_size", "320x240", 0);
if (avformat_open_input(&this->pFormatCtx, this->finalInputName.c_str(), pInputFormat, &dict) != 0)
    return -1; // Couldn't open input

so that in this example I am capturing at 320x240, I measure approximately the same data rate in both cases: around 2.7 MB per minute. I use the baseline profile and the ultrafast preset.
I would think that using -s 320x240 on the command line is equivalent to setting the same dictionary option to open the capture stream at a smaller resolution, but it doesn't seem to behave that way. Likewise, capturing at any resolution, downscaling, and sending the scaled frames over the network should logically take less bandwidth, but that is not what I observe.
Can anyone provide some guidance on what I may be doing wrong?

Thanks for your time!