Since HDR (High Dynamic Range) and its twin WCG (Wide Color Gamut) emerged as the ingredients of Ultra HD that contribute most to perceived improvements in visual quality, operators and broadcasters have struggled to embrace them within their workflows. The two main related challenges are displaying HDR at optimum quality on TV sets that support it, and continuing to deliver SDR (Standard Dynamic Range) to those displays that do not.
The EBU (European Broadcasting Union), which represents public broadcasters, has acted to address these two issues, especially the first, after its members complained that their video editing software was failing to help them tackle the complexities of HDR production. The first main complaint is that there are large numbers of new settings, each with numerous options, making it hard for broadcasters to be sure they have made the best selection in each case. This is compounded by the second problem: a lack of documentation on certain parameters and how they are implemented. The EBU points out that this can lead to incorrect assumptions about the workflow and, as a result, poorer image quality.
To help mitigate these problems, the EBU is currently preparing best practice guidelines, including project files and test patterns for some of the most common operations. Its working group is also providing feedback to vendors of video editing software with suggestions on how they can improve their products for HDR handling.
The other main challenge is ensuring compatibility with older TV sets capable only of displaying SDR, with the risk that migrating to HDR-only workflows and then down-converting will actually make the experience worse on those legacy displays. This is a serious problem given that it applies to the great majority of TV sets, including many still made today. In the US, 60 million 4K UHD TVs were shipped in 2018 but only 20% of those were HDR-capable. Taking account of the 338 million legacy TVs in the US as well, it is clear a mixed production and distribution environment will prevail for the foreseeable future.
This issue of compromising the quality of SDR BT.709 signals derived from HDR has held back acceptance of live HDR TV production so far and has been one of the great challenges for the field. One option is to establish parallel SDR/HDR production workflows where possible, but this greatly increases both the cost and complexity.
This problem has now been largely solved by taking account of the different display characteristics of SDR and HDR. For non-live content it was less of a challenge simply because there is more time to perform color grading and get the displayed look right. But live production, by definition, has to be right first time, and the system must also handle different sources such as graphics and slow-motion replays, as well as HDR and SDR cameras, all in different signal formats, for blending into a single stream.
The basic challenge is to ensure that content looks exactly the same after conversion either way between HDR and SDR, irrespective of what camera it has been shot with. That means firstly that scenes shot in HDR should look the same after down conversion to SDR as they would if they had been captured by an SDR camera in the first place. Secondly, content shot in SDR should look as similar as possible when converted up to HDR as it would had it been shot with an HDR camera.
The underlying problem is that SDR was originally developed for relatively dim CRT (cathode-ray tube) displays over two decades ago, before LED came along, so SDR images appear brighter and more intensely colored than the real scene when shown on modern LED screens. To date, conversion between HDR and SDR has involved “display-light” techniques designed to preserve the look of the original format, which tend to yield pictures that look more color-saturated (brighter) on an HDR display than a native HDR camera would produce. Conversely, a display-light conversion from HDR to SDR tends to look less saturated than the output of an SDR camera shooting the same scene.
These differences arise because each format has a different relationship between the light in the scene falling on the camera sensor and the light emitted by the display. This relationship is known as the OOTF (opto-optical transfer function).
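As a simplified illustration (pure power-law curves, not the exact piecewise transfer functions defined in BT.709 or BT.1886), the OOTF can be modeled as the composition of the camera's OETF and the display's EOTF. The composition is deliberately not an identity: traditional SDR systems apply an end-to-end gamma of roughly 1.2 to compensate for dim viewing surrounds.

```python
# Illustrative sketch only: power-law approximations, not the exact
# BT.709 / BT.1886 piecewise transfer curves.

def oetf(scene_light, gamma=1 / 2.0):
    """Camera: normalized linear scene light -> electrical signal."""
    return scene_light ** gamma

def eotf(signal, gamma=2.4):
    """Display: electrical signal -> normalized display light."""
    return signal ** gamma

def ootf(scene_light):
    """End-to-end OOTF: scene light in, display light out."""
    return eotf(oetf(scene_light))

# The composition gives an overall system gamma of 2.4 / 2.0 = 1.2,
# so mid-grey scene light is rendered slightly darker on the display.
print(ootf(0.5))  # 0.5 ** 1.2 ≈ 0.435
```

Because HDR and SDR systems assume different OOTFs, a signal converted naively between them ends up with a different scene-to-display mapping than a native camera in the target format would have produced.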
Various broadcasters, including the BBC in the UK, have experimented quite successfully with an alternative to display-light conversion called ‘scene-light’ format conversion. This can be achieved using cameras that offer two simultaneous outputs, one HDR and the other SDR in the BT.709 format. The camera does this by adjusting its exposure for the HDR output and then deriving the SDR signal from it, applying a fixed gain to the linear scene-light from the sensor followed by the Opto-Electrical Transfer Function (OETF) to convert into an electrical signal.
The BBC then calculated an additional intensity gain to be applied during this conversion process, before the OETF. It was this prior calculation that brought the down-converted output closely into line with native SDR, and the process can be reversed for up-conversion. This allowed the BBC to claim that the new workflow “greatly simplifies HDR production without compromising quality”. It argued that now this approach has been shown to work well on a large scale, it should lead to a significant increase in HDR TV production.
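In outline, and under simplifying assumptions (the BT.709 OETF, normalized signals, and an illustrative gain value rather than the BBC's calculated figure), the scene-light conversion described above might be sketched as follows:

```python
def bt709_oetf(e):
    """BT.709 OETF: normalized linear scene light -> non-linear signal."""
    return 4.5 * e if e < 0.018 else 1.099 * e ** 0.45 - 0.099

def bt709_inverse_oetf(v):
    """Inverse BT.709 OETF: non-linear signal -> linear scene light."""
    return v / 4.5 if v < 4.5 * 0.018 else ((v + 0.099) / 1.099) ** (1 / 0.45)

def scene_light_to_sdr(hdr_scene_linear, gain=2.0):
    """Scene-light down-conversion: apply a fixed exposure gain in the
    linear scene-light domain (the gain value here is illustrative,
    not the BBC's calculated figure), clip to the SDR range, then
    apply the SDR OETF."""
    e = min(max(hdr_scene_linear * gain, 0.0), 1.0)
    return bt709_oetf(e)

def sdr_to_scene_light(sdr_signal, gain=2.0):
    """Reverse path for up-conversion: undo the OETF, then remove the
    exposure gain to recover linear scene light."""
    return bt709_inverse_oetf(sdr_signal) / gain
```

Because the gain is applied to linear scene light before the OETF rather than to display light after it, the step is invertible within the un-clipped range, which is what makes the same approach usable in both directions.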
This work will surely be incorporated in the EBU’s new HDR workflow guidelines.