Does SRT have drawbacks, or is the darling protocol untouchable?

Hanging somewhere in the lull between two significant industry trade shows on either side of the pond – NAB and IBC – we thought it was about time to revisit the hot topic of Secure Reliable Transport (SRT) this week. A few small SRT announcements have surfaced since the open source protocol stole the spotlight in Vegas for the second year running, although nothing we consider earth-shattering. Someone might surprise us with an SRT-related announcement at next week’s Anga show, but that’s unlikely at a small cable event.

A little over a year has passed since SRT achieved one of its most significant deployments to date, with ESPN rolling out SRT-equipped devices to 14 athletic conferences to produce over 2,200 events via low-cost internet connections, replacing traditional satellite uplink services and saving somewhere between $8 million and $9 million. If ESPN can achieve cost savings on this scale for relatively low-key events, imagine the possibilities for major live events – cash which can ultimately be invested elsewhere in improving the viewer experience.

But with streaming industry pioneers like Netflix and YouTube delivering HTTP content over CDNs to millions of viewers without a helping hand from SRT, what’s all the fuss about? A whitepaper from broadcast video vendor Haivision, a founding member of the SRT Alliance, essentially aims to debunk the myth that HTTP streaming, typically fed by RTMP contribution, is the be-all and end-all for OTT video. In fact, delays as high as 30 seconds are not uncommon in HTTP streaming, caused primarily by the multitude of processing steps and various buffers along the signal path.
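As a rough illustration of where those 30 seconds can come from, the back-of-the-envelope sketch below tallies the usual contributors in segmented HTTP delivery; the segment duration, buffer depth and processing times are our own assumed figures, not numbers taken from the whitepaper.

```python
# Rough glass-to-glass latency estimate for segmented HTTP streaming (HLS/DASH).
# All numbers are illustrative assumptions, not measurements from the whitepaper.

segment_duration_s = 6.0       # typical segment length
player_buffer_segments = 3     # players commonly buffer ~3 segments before starting
encode_package_s = 4.0         # encoding plus packaging before a segment is published
cdn_propagation_s = 2.0        # origin-to-edge transfer and cache fill

latency_s = (
    encode_package_s
    + segment_duration_s                       # a segment must be complete before it is listed
    + player_buffer_segments * segment_duration_s
    + cdn_propagation_s
)

print(f"Estimated end-to-end latency: {latency_s:.0f} seconds")  # roughly 30 seconds
```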

In addition, Haivision warns that Transmission Control Protocol (TCP), the transport underpinning HTTP delivery, can cause a sharp spike in delays because TCP requires that every last packet of a stream is delivered to the end user in the exact original order. This means TCP perpetually attempts to resend missing data, as there is no capability to skip over bad bytes.
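To see what that in-order requirement means in practice, here is a minimal toy model of TCP-style head-of-line blocking; the sequence numbers and the choice of which packet goes missing are purely illustrative.

```python
# Toy model of TCP-style in-order delivery: nothing after a missing packet
# can be handed to the application until the gap is retransmitted and filled.

def deliver_in_order(received, expected_count):
    """Return the packets the application can consume so far, in order."""
    delivered = []
    next_seq = 0
    while next_seq < expected_count and next_seq in received:
        delivered.append(received[next_seq])
        next_seq += 1
    return delivered, next_seq

# Packets 0-9 sent; packet 3 was lost on the first attempt.
received = {seq: f"packet-{seq}" for seq in range(10) if seq != 3}

ready, stalled_at = deliver_in_order(received, 10)
print(len(ready), "packets delivered, stalled waiting for seq", stalled_at)

# Only after packet 3 is retransmitted can packets 4-9 (already received) be released.
received[3] = "packet-3 (retransmitted)"
ready, _ = deliver_in_order(received, 10)
print(len(ready), "packets delivered after retransmission")
```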

A third and final catch of HTTP relates to the manner in which TCP drops packet transmission rates when congestion occurs. “While this behavior is good for reducing overall congestion in a network, it is not appropriate for a video signal, which cannot survive a drop in speed below its nominal bit rate,” it warns.
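A quick back-of-the-envelope calculation shows why a rate drop below the nominal bit rate is fatal for live video: the player’s buffer drains at the shortfall rate until playback stalls. The bit rates and buffer depth below are assumptions for illustration only.

```python
# How long a player buffer survives when TCP congestion control throttles
# throughput below the video's nominal bit rate. Figures are illustrative.

nominal_bitrate_mbps = 5.0    # bit rate the stream was encoded at
throttled_mbps = 3.0          # throughput after TCP backs off under congestion
buffer_s = 10.0               # seconds of video currently buffered at the player

shortfall_mbps = nominal_bitrate_mbps - throttled_mbps
# Each second of playback consumes 5 Mb of video but only 3 Mb arrives,
# so the buffer loses the equivalent of 0.4 seconds of video per second.
drain_rate = shortfall_mbps / nominal_bitrate_mbps
seconds_until_stall = buffer_s / drain_rate

print(f"Playback stalls in roughly {seconds_until_stall:.0f} seconds")  # ~25 seconds
```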

Haivision naturally goes on to sell a dream-like scenario of simultaneously reducing latency and costs, and it’s difficult to pick holes in the technology given the evidence at hand and frenzied adoption of SRT since the technology was open sourced two years ago.

From the negatives of TCP to the benefits of SRT. Haivision lists a total of 12 specific features which differentiate SRT from other video stream delivery formats. Of course, being open source and therefore non-proprietary is the key advantage, while handling long network delays and supporting multiple stream types as well as multiple simultaneous streams are primary benefits. Avoiding the risks and costs of single-vendor lock-in is also right up there among the most attractive prospects for SRT adopters, backed by a rapidly growing community.

Negative reviews or damning sentiments relating to SRT are something of an endangered species, most likely owing to the protocol’s infancy and open source approach. But one downside we have stumbled across from the vendor perspective is the apparent lack of HEVC paths for SRT in existing encoders on the market, although Teradek and Wowza are apparently working on HEVC over SRT from remote locations as we speak – so watch this space.

The CTO of video encoding vendor Video Rx, Robert Reinhardt, recently relayed an interesting story at a Streaming Media event. “I have a client that is pushing from North America across the pond to Europe and using SRT to connect. It was having issues because it was a relatively early implementation by Wowza, and I said, why are you using SRT? Why don’t you just go back to RTMP? The client replied saying it was having issues with RTMP too. So I said, if you’re having issues with both RTMP and SRT, I don’t think you can blame the transport; it sounds like something else is going on here, because if RTMP is timing out, then you probably have other network problems going on.” This is a note of caution for SRT adopters: the protocol isn’t a holy grail that solves deeper issues within the network.

Another challenge is convincing major browsers to implement the protocol which would enable its use in client playback. “Even if Haivision gives a lot of resources to it, I don’t think it’s going to convince all the browser vendors to just magically insert this by next year into their product,” said Reinhardt.

A more trivial downside is that SRT already existed as an acronym in the video industry long before the low latency protocol came along, as the file extension for SubRip subtitle files (.srt), so an online search for information on the protocol could easily lead you astray to an entirely different technology stack.

Moving swiftly on to how SRT has made a name for itself. The diagram below visualizes how an error appears in the output signal of an uncorrected stream whenever a packet is lost (top), while Forward Error Correction (FEC) adds a constant amount of extra data to the stream so that lost packets can be recreated, as shown in the middle. Then we have Automatic Repeat reQuest (ARQ), which retransmits lost packets upon request from the receiver, avoiding the constant bandwidth overhead of FEC.

ARQ requires caching at the sending location (to temporarily store packets in case they need to be retransmitted) and a buffer at the receiving location to rearrange the packets into the correct order before they are sent along to the video decoder or other receivers, notes Haivision.
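The toy exchange below sketches that sender-cache-plus-receiver-buffer arrangement in the spirit of SRT’s ARQ scheme; the class names, packet counts and NAK handling are simplifications of our own, not the actual SRT wire protocol.

```python
# Toy ARQ exchange: the sender caches recent packets, the receiver spots
# sequence gaps, requests only the missing packets (a NAK), and reorders
# data before handing it to the decoder. Simplified illustration only.

from collections import OrderedDict

class Sender:
    def __init__(self):
        self.cache = OrderedDict()      # recently sent packets, kept for retransmission

    def send(self, seq, payload):
        self.cache[seq] = payload
        return seq, payload

    def retransmit(self, seq):
        return seq, self.cache[seq]     # resend only what the receiver asked for

class Receiver:
    def __init__(self):
        self.buffer = {}                # out-of-order packets parked here
        self.next_seq = 0               # next sequence number the decoder expects

    def receive(self, seq, payload):
        self.buffer[seq] = payload
        # Any sequence number below the highest received but absent is presumed lost.
        return [s for s in range(self.next_seq, max(self.buffer) + 1) if s not in self.buffer]

    def drain(self):
        """Hand contiguous, in-order packets to the decoder."""
        out = []
        while self.next_seq in self.buffer:
            out.append(self.buffer.pop(self.next_seq))
            self.next_seq += 1
        return out

sender, receiver = Sender(), Receiver()
for seq in range(5):
    pkt = sender.send(seq, f"frame-{seq}")
    if seq != 2:                        # simulate packet 2 going missing on the link
        receiver.receive(*pkt)

missing = receiver.receive(*sender.send(5, "frame-5"))
print("NAK for:", missing)              # receiver asks only for the lost packet
receiver.receive(*sender.retransmit(missing[0]))
print("Decoder gets:", receiver.drain())  # frames 0-5, back in order
```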

“The benefits are significant for both technology suppliers and users, greatly simplifying implementation and reducing costs, thereby improving product availability and helping to keep prices low. And, since every implementer uses the same code base, interoperability is simplified,” is probably a better conclusion for the whitepaper than the one it actually chose.
