[FFmpeg-user] Glossary: Nyquist
Anatoly
anatoly at kazanfieldhockey.ru
Sat Oct 3 13:41:40 EEST 2020
On Fri, 2 Oct 2020 20:47:57 -0400
"Mark Filipak (ffmpeg)" <markfilipak at bog.us> wrote:
>In your scenario, your eyes do see 640x480. Your brain does see
>640x480. But in order to cleanly 'see' a black-white edge inside those
>640x480 dots, the 640x480 dots need to be made from 1280x960 samples
>within the camera. If the camera made 640x480, then, yes, you would see
>that edge at 320x240 effective resolution (i.e. fuzzier).
I may not agree with that. What algorithm do you propose to downsample
from 1280 to 640? How would it differ from the natural optical process
of splitting one white dot of the image projected on the CCD into two
sampled CCD pixel values?
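To make the question concrete, here is a minimal Python sketch (my own
toy illustration, not ffmpeg code) of the simplest such downsample:
averaging each pair of fine samples. A CCD pixel twice as wide would
integrate roughly the same light optically, which is exactly why I ask
how the software step differs from the optical one.

def downsample_2to1(line):
    """Average neighbouring pairs: 1280 samples -> 640 samples."""
    assert len(line) % 2 == 0
    return [(line[i] + line[i + 1]) / 2.0 for i in range(0, len(line), 2)]

# One scanline with a sharp black-to-white edge, sampled at 1280 points.
fine = [0.0] * 640 + [1.0] * 640     # 0.0 = black, 1.0 = white
coarse = downsample_2to1(fine)       # 640 values, the edge is preserved

print(len(coarse), coarse[318:322])  # -> 640 [0.0, 0.0, 1.0, 1.0]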
-snip-
> Okay, 2 thought experiments:
> 1 - Imagine a film scanner sampling a film frame line by line. Isn't
> the scanner making a signal that the sampler uses to make samples? If
> you think that Nyquist applies only to signals, then, there's your
> signal. 2 - What about a CCD array that makes all the samples at one
> time? Doesn't that expand the signal to 2 dimensions?
Then I guess you want to say a different thing:
In order to reproduce an image of 640 black-white alternating vertical
stripes *guaranteed clearly, without possible interference*, we need a
horizontal resolution of 1280 all the way from CCD to LCD.
This is true, but it has nothing to do with the Nyquist criterion, since
it is trivial to every technician that Nyquist determines the *minimum*
necessary sampling rate (temporal or spatial, if you wish) at which
reproducing the given original is possible at all.
The same goes for sound: for example, a 44100 Hz sampling rate is the
minimum required to digitize a 22050 Hz wave, but the quality of that
digitized wave will be low, and you can easily get interference with the
sampling frequency. Still, at least you can do it, because it satisfies
the Nyquist criterion.
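Here is a quick numeric sketch (a toy example of my own, not tied to any
ffmpeg code) of what I mean by interference with the sampling frequency:
a 22050 Hz sine sampled at exactly 44100 Hz yields two samples per
period, so the amplitude you capture depends entirely on where those
samples land relative to the wave.

import math

fs = 44100.0   # sampling rate, Hz
f  = 22050.0   # signal frequency, Hz (exactly fs / 2)

def sampled_peak(phase):
    """Largest |sample| of sin(2*pi*f*t + phase) over the first 100 samples."""
    samples = [math.sin(2 * math.pi * f * (n / fs) + phase) for n in range(100)]
    return max(abs(s) for s in samples)

print(sampled_peak(math.pi / 2))   # sample instants hit the peaks -> 1.0
print(sampled_peak(0.0))           # sample instants hit the zeros -> ~0.0

That is why 44100 Hz is only the borderline "you can do it" rate, and
why the 640-stripe picture above needs 1280 samples for a guaranteed
result.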
> >... it describes the minimum sampling rate ...
Yes, it does.
>
> Nyquist has nothing to do with rate.
The temporal rate in an analogue video signal is just another
representation of the spatial rate of the CCD pixels.
> By the way, I've given up trying to make an illustration of
> 2-dimensional Nyquist sampling. It's too hard.
I think it's easy. Just scale it down to one dimension to start with.
Let's draw an XY plot of one line of our picture of alternating
black-white stripes:
Voltage   ^
  -or-    |
  Light   |  b   w    b    w
intensity |     ___       ___
          |    /   \     /   \
          |___/     \___/     \_
          |_______________________> Time -or- position
          --|----|----|----|---      samples
           _    _    _    _
          / \__/ \__/ \__/ \_        sampling freq -or- distance.
Here we are digitizing 4 pixels. It does not matter how they are
separated from one another - temporally (in an analogue video signal) or
spatially (lying on the CCD silicon surface). The Nyquist criterion says
that to digitize (somehow) these 4 pixels we need to take at least 4
samples. Note that our "signal" frequency (again, temporal or spatial)
is 1/2 of the sampling frequency. That is it.
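To put the plot in numbers, here is another toy sketch of my own (the
positions and helper are just for illustration): the scanline is a
square wave of two black-white cycles, and taking one sample in the
middle of each of the 4 pixels gives exactly the 2:1 ratio of sampling
frequency to "signal" frequency shown above.

def scanline(x):
    """Light intensity along the line: black (0.0) or white (1.0) stripes."""
    return 1.0 if int(x) % 2 == 1 else 0.0   # pixel 0 = b, 1 = w, 2 = b, 3 = w

# Sample in the middle of each of the 4 pixels.
sample_positions = [0.5, 1.5, 2.5, 3.5]
samples = [scanline(x) for x in sample_positions]

signal_cycles = 2              # two black-white pairs across the line
sample_count  = len(samples)   # 4 samples

print(samples)                       # [0.0, 1.0, 0.0, 1.0] - pattern recovered
print(sample_count / signal_cycles)  # 2.0 samples per cycle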
Maybe it's fun to discuss such things, but I think this is not the right
place to do it, because it has no direct relation to ffmpeg usage.