[FFmpeg-devel] [PATCH] Implement NewTek NDI support
Maksym Veremeyenko
verem at m1stereo.tv
Sun Jul 23 22:43:58 EEST 2017
On 23.07.2017 13:20, Nicolas George wrote:
> On quintidi, 5 Thermidor, year CCXXV, Maksym Veremeyenko wrote:
[...]
>> Subject: [PATCH] Implement NewTek NDI support
>
> Nit: "lavf: implement..."
>
Would this be OK:
Subject: [PATCH] lavf: implement NewTek NDI support
[...]
> I would prefer libndi_newteck, or maybe just libndi. Because if this
> protocol is successful, in one or two years there may be GSoC student
> implementing it natively for FFmpeg, and it would get the short name.
>
Would you prefer *newtek_ndi* or *libndi_newtek*?
> Could they be convinced to release their code with an usable licence?
>
No answer so far; I will clarify.
[...]
>> diff --git a/doc/indevs.texi b/doc/indevs.texi
>> index 09e3321..7e9a5f1 100644
>> --- a/doc/indevs.texi
>> +++ b/doc/indevs.texi
>> @@ -327,6 +327,60 @@ ffmpeg -channels 16 -format_code Hi50 -f decklink -i 'UltraStudio Mini Recorder'
>>
>> @end itemize
>>
>> + at section ndi
>> +
>> +The ndi input device provides capture capabilities for using NDI (Network
>> +Device Interface, standard created by NewTek).
>> +
>
> Please add a few words about the syntax for the input "filename".
>
The input filename is an /NDI source name/, which can be discovered by passing 
-find_sources 1 on the command line; it has no specific syntax, but it is 
human-readable.
>> +To enable this input device, you need the NDI SDK and you
>> +need to configure with the appropriate @code{--extra-cflags}
>> +and @code{--extra-ldflags}.
>
> Could they be convinced to provide a pkg-config file?
>
For now, only libraries, headers and example files are provided.
[...]
>> + at example
>
>> +ffmpeg -f ndi -find_sources 1 -i foo
>
> "foo" is not very elegant; "dummy"?
>
>> + at end example
>> +
>> + at item
>> +Restream to NDI:
>> + at example
>
>> +ffmpeg -f ndi -i "DEV-5.INTERNAL.M1STEREO.TV (NDI_SOURCE_NAME_1)" -acodec copy -vcodec wrapped_avframe -f ndi -y NDI_SOURCE_NAME_2
>
> Nowadays, I think we prefer "-c:a" and "-c:v".
>
ok
> Is -vcodec wrapped_avframe really necessary? It should be automatic.
>
Yes, I can remove it if you prefer.
>> +#include <pthread.h>
>
>> +//#include <semaphore.h>
>
> To remove before applying.
>
I will.
>> +struct ndi_ctx {
>
> Nit: NDIContext for consistency.
>
ok
>> + const AVClass *cclass;
>> +
>
>> + void *ctx;
>
> Looks unused.
>
It is used for av_log() in the reader thread.
>> +
>> + /* Options */
>> + int find_sources;
>> + int wait_sources;
>> + int allow_video_fields;
>> + int max_frames_probe;
>> + int reference_level;
>> +
>> + /* Runtime */
>
>> + NDIlib_recv_create_t* recv;
>
> Here and everywhere: nit: the pointer mark * belongs with the variable
> or field, not with the type.
>
will be fixed
>> + pthread_t reader;
>
>> + int f_started, f_stop, dropped;
>
> "f_"? What does it mean?
>
It stands for "flag". Is that a bad prefix?
>> +
>> + /* Streams */
>> + AVStream *video_st, *audio_st;
>> + AVPacketQueue queue;
>
>> + AVFormatContext *avctx;
>
> All the functions are already called with avctx as a parameter, no need
> for a back pointer.
>
As mentioned above, it is used in the reader thread.
>> + int interlaced;
>> +};
>> +
>> +static int ndi_enqeue_video(struct ndi_ctx *ctx, NDIlib_video_frame_t *v)
>> +{
>> + AVPacket pkt;
>> + av_init_packet(&pkt);
>> +
>
>> + pkt.dts = pkt.pts = av_rescale_q(v->timecode, (AVRational){1, 10000000LL}, ctx->video_st->time_base);
>
> The remark about not using "long" applies to qualified integers, of
> course.
>
> Nit: #define NDI_TIMEBASE and use it everywhere.
>
ok
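A rough sketch of what I have in mind, with NDI_TIME_BASE being the SDK's
100 ns timecode unit:

    #include "libavutil/rational.h"

    #define NDI_TIME_BASE   10000000
    #define NDI_TIME_BASE_Q (AVRational){1, NDI_TIME_BASE}

    /* ... */
    pkt.dts = pkt.pts = av_rescale_q(v->timecode, NDI_TIME_BASE_Q,
                                     ctx->video_st->time_base);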
>> + pkt.duration = av_rescale_q(1, (AVRational){v->frame_rate_D, v->frame_rate_N}, ctx->video_st->time_base);
>
> Unless I am mistaken, video_st->time_base is set to
> (AVRational){v->frame_rate_D, v->frame_rate_N}. Therefore, I suggest to
> drop this av_rescale_q() and add an av_assert1() instead (be sure to
> always use --assert-level=2 when developing).
>
I still have some doubts about the stream time_base used here: I think it 
should be 1/10000000, matching the SDK's timecode unit, rather than the frame 
rate, because if the source provides a variable frame rate it would be hard to 
compute the real frame duration.
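With a 1/10000000 time_base the per-frame duration could still be derived from
the frame rate the SDK reports for each frame; a rough sketch:

    /* assuming video_st->time_base == (AVRational){1, NDI_TIME_BASE} */
    pkt.dts = pkt.pts = v->timecode;  /* timecode is already in 100 ns units */
    pkt.duration = av_rescale_q(1, (AVRational){v->frame_rate_D, v->frame_rate_N},
                                ctx->video_st->time_base);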
>> +
>> + av_log(ctx->avctx, AV_LOG_DEBUG, "%s: pkt.dts = pkt.pts = %"PRId64", duration=%"PRId64", timecode=%"PRId64"\n",
>
>> + __FUNCTION__, pkt.dts, pkt.duration, v->timecode);
>
> __FUNCTION__ is not standard, but used elsewhere in the code; __func__
> is more standard and also used in the code. But printing the function
> name in debug messages is not usually done in the code. Better just make
> sure that the message is easily greppable.
>
> Same below of course.
>
I can use __func__, or simply drop the function name if you prefer.
[...]
>> +// NDIlib_recv_free_video(ctx->recv, v);
>
> Cleanup?
>
I wanted to avoid copying the video frame data twice and to free the packet 
data with av_free_packet(), but as I see now, the frame data is allocated 
inside the SDK with the new operator.
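So I will probably copy the data into the packet and let the SDK free its own
buffer; an untested sketch:

    ret = av_new_packet(&pkt, v->yres * v->line_stride_in_bytes);
    if (ret < 0)
        return ret;
    memcpy(pkt.data, v->p_data, v->yres * v->line_stride_in_bytes);
    NDIlib_recv_free_video(ctx->recv, v);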
>> +static void* ndi_reader(void* p)
>> +{
>
>> + struct ndi_ctx *ctx = (struct ndi_ctx *)p;
>
> Unnecessary (and thus harmful) cast.
>
ok
>> +
>
>> + while (!ctx->f_stop)
>> + {
>
> Nit: braces on the same line. Same below.
>
ok
>> + NDIlib_video_frame_t v;
>> + NDIlib_audio_frame_t a;
>> + NDIlib_metadata_frame_t m;
>> + NDIlib_frame_type_e t;
>> +
>> + t = NDIlib_recv_capture(ctx->recv, &v, &a, &m, 40);
>> +
>
>> + if (NDIlib_frame_type_video == t)
>
> The test looks strange in that direction.
>
Why? That style guards against typos like:
    if (t = NDIlib_frame_type_video)
>> + ndi_enqeue_video(ctx, &v);
>> + else if (NDIlib_frame_type_audio == t)
>> + ndi_enqeue_audio(ctx, &a);
>> + else if (NDIlib_frame_type_metadata == t)
>> + NDIlib_recv_free_metadata(ctx->recv, &m);
>
> Is there a risk of leak if a new type of packet is invented? Looks like
> a bad API design by NewTek.
>
The API is still in development and not final, so we can expect anything.
[...]
>> + for (i = 0; i < n; i++)
>> + {
>> + av_log(avctx, AV_LOG_INFO, "\t'%s'\t'%s'\n", ndi_srcs[i].p_ndi_name, ndi_srcs[i].p_ip_address);
>> + if (!strcmp(name, ndi_srcs[i].p_ndi_name)) {
>> + *source_to_connect_to = ndi_srcs[i];
>
>> + j = i;
>
> break?
>
yes
>> + }
>> + }
>> +
>> + NDIlib_find_destroy(ndi_find);
>> +
>
>> + return j;
>
> j must be unsigned, then.
>
A negative j means the source was not found.
>> + if (!NDIlib_initialize()) {
>> + av_log(avctx, AV_LOG_ERROR, "NDIlib_initialize failed.\n");
>
>> + return AVERROR(EIO);
>
> AVERROR_EXTERNAL (unless the library provides error codes that could be
> translated).
>
ok
>> + }
>> +
>> + /* Find available sources. */
>
>> + ret = ndi_find_sources(avctx, avctx->filename, &recv_create_desc.source_to_connect_to);
>> + if (ctx->find_sources) {
>
> The find_sources option looks unnecessary if the sources are always
> printed and it only causes an error.
>
It is required so that we exit after the find-sources request, like 
show_devices in the decklink module.
>> + /* Probe input */
>> + for (ret = 0, i = 0; i < ctx->max_frames_probe; i++)
>
> This is already implemented, in a more generic and uniform way, in
> libavformat. Drop the loop and move the format-probing code into
> ndi_read_packet().
>
Could you give me a hint or an example?
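If I understand the idea correctly, the stream would be created lazily from
ndi_read_packet() when the first frame arrives, and avformat_find_stream_info()
would drive the probing by calling read_packet() as often as needed; a rough
sketch:

    if (!ctx->video_st) {
        /* create the stream from the first received frame */
        AVStream *st = avformat_new_stream(avctx, NULL);
        if (!st)
            return AVERROR(ENOMEM);
        st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
        /* fill codecpar from the received frame here */
        ctx->video_st = st;
    }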
[...]
>> + if (NDIlib_FourCC_type_UYVY == v.FourCC || NDIlib_FourCC_type_UYVA == v.FourCC) {
>> + st->codecpar->format = AV_PIX_FMT_UYVY422;
>
> Looks strange: the A in UYVA means alpha, does it not? What happens to
> the alpha channel?
>
According to the documentation:
[...]
	// This is a UYVY buffer followed immediately by an alpha channel buffer.
	// If the stride of the YCbCr component is "stride", then the alpha channel
	// starts at image_ptr + yres*stride. The alpha channel stride is stride/2.
	NDIlib_FourCC_type_UYVA = NDI_LIB_FOURCC('U', 'Y', 'V', 'A')
[...]
Currently the alpha channel is ignored.
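If the alpha plane is to be used later, based on the SDK comment above it would
be located like this (sketch):

    /* the alpha plane follows the UYVY data and has half its stride */
    const uint8_t *uyvy  = (const uint8_t *)v->p_data;
    const uint8_t *alpha = uyvy + v->yres * v->line_stride_in_bytes;
    int alpha_stride     = v->line_stride_in_bytes / 2;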
>> + pthread_create(&ctx->reader, NULL, ndi_reader, ctx);
>
> Missing error check.
>
ok
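For example:

    ret = pthread_create(&ctx->reader, NULL, ndi_reader, ctx);
    if (ret) {
        av_log(avctx, AV_LOG_ERROR, "pthread_create failed: %s.\n", strerror(ret));
        return AVERROR(ret);
    }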
>> + AVFrame *frame = ctx->video_st ? ctx->video_st->codec->coded_frame : NULL;
>
> This line produces a warning, does it not? Anyway, you are not supposed
> to use st->codec any longer.
>
I blindly copied that code from the decklink module, which sets the interlaced 
flag the same way.
[...]
>> + ctx->f_stop = 1;
>
> This needs to be protected by a memory barrier, here and in the thread.
>
The thread only reads the flag and the main application only writes it... But 
if you can point me to an example in FFmpeg's code, or a more appropriate way 
of notifying the thread to exit, I will reimplement it.
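Would C11 atomics (stdatomic.h, as FFmpeg already uses through its compat
layer) be acceptable? A sketch:

    #include <stdatomic.h>

    /* in the context */
    atomic_int f_stop;

    /* main thread, when closing */
    atomic_store(&ctx->f_stop, 1);

    /* reader thread */
    while (!atomic_load(&ctx->f_stop)) {
        /* capture and enqueue frames */
    }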
>> + { "find_sources", "Find available sources" , OFFSET(find_sources), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, DEC },
>
> AV_OPT_TYPE_BOOL
>
OK
>> + { "wait_sources", "Time to wait until the number of online sources have changed (ms)" , OFFSET(wait_sources), AV_OPT_TYPE_INT, { .i64 = 500 }, 100, 10000, DEC },
>
> AV_OPT_TYPE_DURATION
>
OK
>> + { "allow_video_fields", "When this flag is FALSE, all video that you receive will be progressive" , OFFSET(allow_video_fields), AV_OPT_TYPE_INT, { .i64 = 1 }, 0, 1, DEC },
>
> AV_OPT_TYPE_BOOL
>
OK
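So the entries would become roughly the following (note that AV_OPT_TYPE_DURATION
values are in microseconds and need an int64_t field):

    { "find_sources", "Find available sources", OFFSET(find_sources),
      AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, DEC },
    /* wait_sources becomes int64_t; values below are microseconds */
    { "wait_sources", "Time to wait until the number of online sources have changed",
      OFFSET(wait_sources), AV_OPT_TYPE_DURATION, { .i64 = 500000 }, 100000, 10000000, DEC },
    { "allow_video_fields", "When this flag is FALSE, all video that you receive will be progressive",
      OFFSET(allow_video_fields), AV_OPT_TYPE_BOOL, { .i64 = 1 }, 0, 1, DEC },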
>> +AVInputFormat ff_ndi_demuxer = {
>> + .name = "ndi",
>
>> + .long_name = NULL_IF_CONFIG_SMALL("NDI input"),
>
> "Network Device Interface (NDI) input using NewTek library"
>
ok
>> + avframe = av_frame_clone(tmp);
>> + if (!avframe) {
>> + av_log(avctx, AV_LOG_ERROR, "Could not clone video frame.\n");
>
>> + return AVERROR(EIO);
>
> AVERROR(ENOMEM)
>
ok
>> + }
>> +
>> + ctx->video->timecode = av_rescale_q(pkt->pts, st->time_base, (AVRational){1, 10000000LL});
>> +
>
>> + ctx->video->line_stride_in_bytes = avframe->linesize[0] < 0
>> + ? -avframe->linesize[0]
>> + : avframe->linesize[0];
>
> abs()?
>
Copied from the decklink code.
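For reference, the same thing with FFmpeg's FFABS() macro would be:

    ctx->video->line_stride_in_bytes = FFABS(avframe->linesize[0]);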
>> + ctx->video->p_data = (avframe->linesize[0] < 0)
>> + ? (void *)(avframe->data[0] + avframe->linesize[0] * (avframe->height - 1))
>> + : (void *)(avframe->data[0]);
>
> Did you test with negative linesize? It looks like it will flip the
> video.
>
I did not test with negative linesize; I left that for further work.
>> + NDIlib_send_send_video_async(ctx->ndi_send, ctx->video);
>> +
>> + if (ctx->last_avframe)
>> + av_frame_free(&ctx->last_avframe);
>> + ctx->last_avframe = avframe;
>
> This looks very strange. Can you explain?
>
> It looks to me like NDIlib_send_send_video_async() is only asynchronous
> for one frame, but will block if a second frame is given before the
> first one has been sent. Is that it? If so, a comment would be nice.
>
That is exactly its behaviour; I will add a comment about it.
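Something like this, for example:

    /* NDIlib_send_send_video_async() returns immediately, but the SDK may
     * still read from the previously submitted buffer; release the previous
     * frame only after the next send call has returned. */
    NDIlib_send_send_video_async(ctx->ndi_send, ctx->video);

    av_frame_free(&ctx->last_avframe); /* NULL-safe, frees the previous frame */
    ctx->last_avframe = avframe;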
>> + if (st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
>> + return ndi_write_video_packet(avctx, st, pkt);
>> + else if (st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO)
>> + return ndi_write_audio_packet(avctx, st, pkt);
>> +
>
>> + return AVERROR(EIO);
>
> AVERROR_BUG
>
ok
>> +}
>> +
>> +static int ndi_setup_audio(AVFormatContext *avctx, AVStream *st)
>> +{
>> + struct ndi_ctx *ctx = (struct ndi_ctx *)avctx->priv_data;
>> + AVCodecParameters *c = st->codecpar;
>> +
>> + if (ctx->audio) {
>> + av_log(avctx, AV_LOG_ERROR, "Only one audio stream is supported!\n");
>
>> + return -1;
>
> AVERROR(EINVAL)
ok
>> + }
>> + if (c->channels != 2 && c->channels != 4 && c->channels != 8) {
>> + av_log(avctx, AV_LOG_ERROR, "Unsupported number of channels!"
>> + " Only stereo and 7.1 are supported.\n");
>
> Check channel layout too.
>
I will drop this check entirely.
>> + ctx->audio = (NDIlib_audio_frame_interleaved_16s_t *) av_mallocz(sizeof(NDIlib_audio_frame_interleaved_16s_t));
>
> Unnecessary (and thus harmful) cast.
>
> And missing failure check.
>
ok
>> + if (ctx->video) {
>> + av_log(avctx, AV_LOG_ERROR, "Only one video stream is supported!\n");
>
>> + return AVERROR(ENOTSUP);
>
> I suspect this library exists also for Windows and Macos, so you cannot
> use ENOTSUP. EINVAL.
>
ok
>> +
>> + if (!NDIlib_initialize()) {
>> + av_log(avctx, AV_LOG_ERROR, "NDIlib_initialize failed.\n");
>
> ret = AVERROR_EXTERNAL;
>
>> + } else {
>> + ctx->ndi_send = NDIlib_send_create(&ndi_send_desc);
>
>> + if (!ctx->ndi_send)
>> + av_log(avctx, AV_LOG_ERROR, "Failed to create NDI output %s\n", avctx->filename);
>
> ret = AVERROR_EXTERNAL;
>
>> + else
>> + ret = 0;
>
> ret is already 0.
I will check.
--
Maksym Veremeyenko