Direct Take 3.0: lighter, simpler, faster

We started Direct Take back in 2015 as a toy project aiming to put ourselves into the shoes of our customers and see whether the Video SDK was as good as we advertised. The product eventually gained traction due to its vast encoding and I/O capabilities – things we could easily achieve with our technology stack. However, we kept adding more and more features, eventually creating a product that was difficult to both use and maintain.

After 3 months of user interface sketching, coding and intensive testing, we are releasing a brand new app – this time focused on doing just one job and doing it very well: synchronous recording of multiple video feeds.

In addition to supporting most encoders and file formats known to the industry, the new Direct Take offers NDI and SDI ingest (specifically via Blackmagic Design, AJA, DELTACAST, Bluefish444, Stream Labs, Magewell and YUAN), as well as direct virtual input from Video Transport – for remote production scenarios.

This design effort was led by Alex, our Video SDK Product Manager, who also developed the original Direct Take. I asked him a few questions about the new application.

Why is Direct Take 3.0 any better than the old one?

It is lighter, easier to use, quicker in operation, and more reliable – primarily due to a full-time testing process, proper development planning, and a solid approach to architecture. And it looks way better!

Why did we redesign the product from scratch?

The old product was too complicated to support and maintain. It was completely dependent on me, since nobody else really knew all of its features and capabilities – let alone where to look if any updates or fixes were needed. It also had a terribly stupid licensing system, which was actually the first thing we wanted to change – but we eventually decided to change everything.

It just had too many things bundled together, and most of those things were used by one or two customers. It could output video to devices, it supported growing-file playback, it had color correction plugins, CG, playlist management – it was an all-in-one solution, really. We simply implemented almost everything we had available in the Video SDK.

We’ve also learned that most users wanted to monitor and control recording sessions via the multiview, which was a secondary mode in the old product – we had never seriously thought through its design.

So, this time, we went with a multiview-first approach – thinking about what the user wants to do and see while a recording session is in progress. We also focused on the one specific job that most of our customers were using the product for: synchronous recording of 2 or more channels into one or several files.

Finally, we built it with MFormats, which is frame-based: we needed full control over every frame in the pipeline.

What are the benefits of this frame-based approach?

There are two main benefits.

First, it’s critical for event logging: we know about each break, we know about every problem along the pipeline, we know exactly what happens with each frame and when. We provide the user with a full error log of the recording session.
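The kind of per-frame event logging described above can be illustrated with a small sketch. This is not the MFormats API – the `FrameLogger` class and its fields are assumptions for illustration only; the idea is simply that when every frame passes through your code, a gap in frame numbers immediately reveals a break in the signal:

```python
from dataclasses import dataclass, field
import time

@dataclass
class FrameLogger:
    """Collects a per-frame event log for one recording channel (illustrative only)."""
    events: list = field(default_factory=list)
    last_number: int = -1

    def on_frame(self, channel: str, number: int) -> None:
        # A gap in consecutive frame numbers means frames were lost upstream,
        # so we record exactly where the break happened and how large it was.
        if self.last_number >= 0 and number != self.last_number + 1:
            self.events.append(
                (time.time(), channel,
                 f"break: expected frame {self.last_number + 1}, got {number}"))
        self.last_number = number

log = FrameLogger()
for n in [0, 1, 2, 5]:   # frames 3 and 4 were lost
    log.on_frame("cam-1", n)
print(len(log.events))   # one break detected
```

A frame-based pipeline makes this trivial because the application, not the SDK internals, sees every frame; a stream-based pipeline would only surface such problems indirectly.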

Second, this frame-based approach allows us to achieve full synchronicity when recording multiple feeds. We start ingesting once we've received the first frame from each input source, and we stop recording once we've written the final frame to each destination file. That’s it. We guarantee that the duration of each file is identical, and that the number of frames in all files is the same.
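The start/stop logic described here can be sketched as a simple gate: wait until every input has delivered its first frame before writing anything, then stop all outputs as soon as any feed ends, so every file closes with the same frame count. A minimal illustration under assumed names (the iterator-based sources and `synchronized_record` function are hypothetical, not the product's actual code):

```python
def synchronized_record(sources):
    """Record multiple feeds so every output has the same frame count.

    `sources` maps a channel name to an iterator of frames. Writing starts
    only after every source has produced its first frame; it stops as soon
    as any source runs out, so all outputs end on the same frame count.
    """
    iters = {name: iter(src) for name, src in sources.items()}
    # Gate: block until the first frame arrives from every input.
    first = {name: next(it) for name, it in iters.items()}
    files = {name: [frame] for name, frame in first.items()}
    while True:
        batch = {}
        for name, it in iters.items():
            frame = next(it, None)
            if frame is None:        # one feed ended: stop all outputs together
                return files
            batch[name] = frame
        # Commit the batch only when every input contributed a frame.
        for name, frame in batch.items():
            files[name].append(frame)

out = synchronized_record({"cam-1": range(5), "cam-2": range(7)})
print({k: len(v) for k, v in out.items()})  # both capped at 5 frames
```

The frame-based pipeline is what makes the guarantee possible: since the application controls when each frame is ingested and written, equal duration and equal frame counts fall out of the start/stop gating rather than being patched up afterwards.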

What jobs is Direct Take 3.0 designed to do?

The primary job is the initial recording of material that will later be used in post-production. That is, whenever there’s an event, a conference or a concert, content makers use DT to synchronously capture multiple camera feeds into one or several containers – usually MP4 with H.265 (used as a proxy) and ProRes or XDCAM HD422 (used as the primary medium).

These days, when everything comes with the prefix “remote”, DT is used to capture feeds delivered from remote contributors via Video Transport. In the new release, the output from Video Transport is available as a virtual source, which allows Direct Take to receive video without the extra step of NDI encoding.

This time, the product is focused on a precise customer segment, and is designed to do one job well.
