HDR Insights Article 3: Understanding HDR Tone Mapping


In the previous article, HDR Transfer Functions, we discussed transfer functions and how digital image values are converted to light levels on a display. This article discusses how the same HDR image can be displayed differently by different HDR devices.

What is HDR Tone Mapping?

Tone mapping is the process of adapting digital signal values to appropriate light levels based on the HDR metadata. It is not simply the application of the EOTF (Electro-Optical Transfer Function) to the image data; rather, it maps the image data to the display device's capabilities using the metadata. Since a broad range of HDR display devices is available in the market, each with its own Nits (i.e. 'brightness') range, correct tone mapping is necessary for a good user experience. And because tone mapping is driven by the metadata in the video stream, the presence of correct metadata is essential.

Source footage can be shot in HDR with the best cameras and mastered on high-end HDR mastering systems, but it still needs to be displayed optimally on the range of HDR televisions available in the market. Tone mapping performs an appropriate brightness mapping of the content to the device without significant degradation.

Need for HDR Tone Mapping

Let's say an image is shot with a peak brightness of 2,000 Nits. If it is displayed on a television with a 0-2,000 Nits range, the brightness range will be reproduced exactly as shot in the raw footage. However, the results will be different on other devices:

[Image: High Dynamic Range Tone Mapping]

 

Since tone mapping is a necessary operation for displaying PQ-based HDR content on HDR display devices, the television needs to know the native properties of the content, namely the brightness range used, along with the mastering system parameters. This information is conveyed in the form of HDR metadata. After reading the HDR metadata, a display device can choose its tone mapping parameters so that the transformed video lies optimally within its display range.
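
As a rough illustration of the idea (this is a minimal sketch, not the curve defined by any particular HDR standard; the function name, knee position and roll-off shape are assumptions made purely for illustration), the snippet below maps pixel values mastered up to a content peak onto a display with a lower peak: dark and mid-tones pass through unchanged, while the highlights are compressed with a soft roll-off.

```python
def tone_map_nits(pixel_nits, content_peak_nits, display_peak_nits, knee=0.75):
    """Illustrative tone-mapping curve (not from any standard): values below the
    knee pass through unchanged, while the range above the knee is compressed
    so that the content peak lands exactly on the display peak."""
    if content_peak_nits <= display_peak_nits:
        return pixel_nits  # the display can reproduce the content as mastered
    knee_nits = knee * display_peak_nits
    if pixel_nits <= knee_nits:
        return pixel_nits  # darks and mid-tones are left untouched
    # Normalize the highlight region to 0..1 and apply a smooth roll-off
    x = (pixel_nits - knee_nits) / (content_peak_nits - knee_nits)
    rolled = (x / (1.0 + x)) * 2.0  # maps x = 1 (the content peak) exactly to 1.0
    return knee_nits + rolled * (display_peak_nits - knee_nits)

# A 2,000-Nit highlight shown on panels with different peak brightness
print(tone_map_nits(2000, 2000, 500))    # compressed to the 500-Nit peak
print(tone_map_nits(2000, 2000, 1000))   # compressed to the 1,000-Nit peak
print(tone_map_nits(150, 2000, 500))     # mid-tone passes through unchanged
```

A real television would typically derive the content peak from the HDR metadata carried in the stream and apply a perceptually tuned curve rather than this simple roll-off, but the principle is the same: brighter content is folded gracefully into the range the panel can actually reproduce.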

The next article will discuss the specific metadata for HDR10 and HDR10+, two different implementations of HDR. Stay tuned for that.


Definitions

cd/m2 – Candela per square metre, the SI unit of luminance. The candela (cd) is the base unit of luminous intensity in the International System of Units (SI), i.e. luminous power per unit solid angle emitted by a point light source in a particular direction. A common wax candle emits light with a luminous intensity of roughly one candela.

Nits – A non-SI unit used to describe luminance. 1 Nit = 1 cd/m2.

HDR – High Dynamic Range. A technology that improves the brightness and contrast range in an image (up to 10,000 cd/m2).

SDR – Standard Dynamic Range. The brightness/contrast range usually available in regular, non-HDR televisions, typically up to 100 cd/m2. This term came into existence after HDR was introduced.

WCG – Wide Color Gamut. A color gamut that offers a wider range of colors than BT.709. DCI-P3 and BT.2020 are examples of WCG, offering a more realistic representation of images on display devices.

EOTF – Electro-optical transfer function. A mathematical transfer function that describes how digital values are converted to light on a display device.

OETF – Opto-electronic transfer function. A mathematical transfer function that describes how light values are converted to digital values, typically within cameras.

OOTF – Opto-optical transfer function. A transfer function that compensates for the difference in tonal perception between the environment of the camera and that of the display.

PQ – Perceptual Quantizer. A transfer function devised to represent the wide brightness range (up to 10,000 Nits) of HDR devices.

HLG – Hybrid Log-Gamma. A transfer function devised to represent the wide brightness range of HDR devices. HLG is largely compatible with existing SDR devices within the SDR range.

HDR Insights Article 2: PQ and HLG Transfer Functions for HDR


In the previous article, HDR Introduction, we discussed the benefits that HDR (High Dynamic Range) brings in terms of video quality. This article talks about how that is achieved.

To display digital images on a screen, a display device needs to convert pixel values into corresponding light values. This conversion is usually non-linear and is described by the EOTF (Electro-Optical Transfer Function). Different display devices support different types of transfer functions.

Regular HDTV display devices (SDR – Standard Dynamic Range – monitors) normally use the BT.709 gamma transfer function to convert the video signal into light. These monitors are primarily designed to display images with a brightness range of up to 100 Nits (cd/m2).

 

High Dynamic Range – Transfer Functions (PQ & HLG)

 

HDR defines two additional transfer functions to cover this much wider brightness range – Perceptual Quantizer (PQ) and Hybrid Log-Gamma (HLG). HDR PQ is an absolute, display-referred signal, while HDR HLG is a relative, scene-referred signal. This means that HLG-enabled display devices automatically adapt the light levels based on the content and their own display capabilities, while PQ-enabled display devices need to implement tone mapping to adapt the light levels. Display devices use content metadata to display PQ-coded images; this metadata can come once for the entire video stream (static) or for each individual shot (dynamic).

It is expected that, under ideal conditions, dynamic PQ-based transformation will achieve the best quality results, at the cost of compatibility with existing display systems. Please see the examples below:

[Image: HDR PQ Transformation]
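
For reference, the PQ curve itself is fully specified by SMPTE ST 2084. The short sketch below (illustrative Python, using the constants from the standard) converts a normalized PQ signal value into an absolute luminance in Nits:

```python
def pq_eotf(signal):
    """SMPTE ST 2084 (PQ) EOTF: normalized signal value (0..1) -> luminance in Nits."""
    m1 = 2610 / 16384          # 0.1593017578125
    m2 = 2523 / 4096 * 128     # 78.84375
    c1 = 3424 / 4096           # 0.8359375
    c2 = 2413 / 4096 * 32      # 18.8515625
    c3 = 2392 / 4096 * 32      # 18.6875
    e = signal ** (1 / m2)
    y = (max(e - c1, 0.0) / (c2 - c3 * e)) ** (1 / m1)
    return 10000.0 * y         # PQ is display-referred, up to 10,000 Nits

for s in (0.25, 0.5, 0.75, 1.0):
    print(f"signal {s:.2f} -> {pq_eotf(s):8.1f} Nits")
```

Feeding in a full-scale signal of 1.0 yields 10,000 Nits, which is precisely why PQ-coded content generally needs tone mapping on today's displays.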

HDR – Signal to light mapping

The graph below describes the mapping from signal to light level for various transfer functions. The vertical axis shows signal values on a scale of 0-1, with 0 being black and 1 being white; this makes the signal range bit-depth agnostic. The horizontal axis shows the light level of the display device in Nits.

[Image: HDR – Signal to Light Mapping]

Human beings are more sensitive to changes in darker regions than to changes in brighter regions. HDR systems exploit this property by providing more granularity in darker regions than in brighter ones. The graph above shows that the darker light levels are represented by a larger portion of the signal range than the brighter ones – meaning a more granular representation of dark tones. While the distribution is fairly even for BT.709-based displays, it becomes less granular in the brighter regions for HDR displays. In the case of HLG, more than half of the signal values cover light levels between 0 and roughly 60 Nits, with the remaining signal values covering the 60-1,000 Nits range. Similarly, for PQ (ST 2084) based displays, roughly the lower half of the signal values covers light levels below about 100 Nits, with the remaining signal values covering the rest of the range up to 1,000 Nits and beyond.
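
As a quick check of the HLG behaviour described above, the sketch below applies the BT.2100 HLG inverse OETF together with the nominal OOTF (system gamma 1.2) for an assumed 1,000-Nit display; the mid-point of the signal range comes out at roughly 50 Nits, in line with the graph. The 1,000-Nit peak and the grey-scale simplification are assumptions made for illustration only.

```python
import math

def hlg_inverse_oetf(signal):
    """BT.2100 HLG inverse OETF: non-linear signal (0..1) -> scene-linear light (0..1)."""
    a, b, c = 0.17883277, 0.28466892, 0.55991073
    if signal <= 0.5:
        return (signal ** 2) / 3.0
    return (math.exp((signal - c) / a) + b) / 12.0

def hlg_display_nits(signal, peak_nits=1000.0, system_gamma=1.2):
    """Approximate display luminance for a grey-scale HLG signal: scene-linear
    light pushed through the nominal HLG OOTF (system gamma 1.2) and scaled to
    the display peak. Assumes a nominal 1,000-Nit reference display."""
    scene = hlg_inverse_oetf(signal)
    return peak_nits * (scene ** system_gamma)

for s in (0.25, 0.5, 0.75, 1.0):
    print(f"signal {s:.2f} -> {hlg_display_nits(s):7.1f} Nits")
```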

According to the graph, HDR HLG is similar to BT.709 in the lower brightness regions, therefore offering better compatibility with existing SDR display devices. HDR PQ, however, is quite different from BT.709. If we try to display a PQ HDR image on an SDR display, the darker regions represented by PQ will invariably become brighter, reducing the contrast of the image and producing a washed-out result (see below):

[Image: HDR PQ image displayed on an SDR monitor]

An HLG-based image looks much better on an SDR monitor:

[Image: HDR HLG image displayed on an SDR monitor]

While PQ-based transforms promise the best quality results on HDR-enabled monitors, in comparison to HLG they require proper tone mapping by the display devices.

This topic will be discussed in our next blog article – Tone mapping.


HDR Insights Article 1: High Dynamic Range – Introduction & Benefits


An Introduction to High Dynamic Range (HDR)

The industry has been constantly working to improve the user experience of video content consumption. The efforts have been multi-pronged, and one of the key areas so far has been higher resolutions. While ten years ago HD was a big thing, 4K has now become a common resolution in many production workflows. 4K is also delivered to consumers in many scenarios, though it still needs better penetration, especially on the broadcast side.

The industry is now at a stage where it needs to decide what will bring the next significant improvement in user experience. Some of the key contenders are:

  • Higher resolution – moving to 8K?
  • Higher frame rates – using 50 fps or 100 fps?
  • Wider color gamut (WCG)
  • HDR (High Dynamic Range)

Tests conducted by many organizations, including the IRT and the EBU, conclude that HDR probably offers a greater improvement in quality of experience than the other candidate technologies.

Read more at https://tech.ebu.ch/docs/events/webinar061_uhd/presentations/EBU_UHD_Features_and_Tests_Driesnack.pdf

Benefits of High Dynamic Range (HDR)

The holy grail of quality is reproducing a video experience on the user's display device as close as possible to what a human being would perceive when watching the same scene in nature with their own eyes. While this is still an ambitious goal, HDR brings us closer to it by offering a wider brightness range (very close to the human perception range) and thereby a more realistic experience.

It is known that the human visual system (HVS) is more sensitive to brightness than to color, and for the same reason we use chroma-subsampled formats such as YUV 4:2:0 and YUV 4:2:2, which subsample the color information but retain the brightness information for every pixel.

Regular SDR (Standard Dynamic Range) monitors available in the market have a range of roughly 1-400 Nits (cd/m2), while HDR allows a representation range of 0-10,000 Nits – a significantly wider range of brightness than SDR devices offer. Currently available HDR TVs reach up to approximately 2,000 Nits. The wider brightness range in HDR simply means that the brightness of each pixel can be represented more accurately, rather than being transformed with a coarser quantization that results in inaccurate pixel representation (poor quality). The quality improvement with HDR is usually most visible in plain areas with gradients, where minor degradation is easily perceived by the human eye.
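
To put the range difference in perspective, dynamic range is often quoted in 'stops', where each stop is a doubling of light. The back-of-the-envelope sketch below uses illustrative black and peak levels (they are assumptions, not measurements of any particular display):

```python
import math

def dynamic_range_stops(black_nits, peak_nits):
    """Dynamic range in photographic stops: each stop doubles the luminance."""
    return math.log2(peak_nits / black_nits)

# Illustrative figures only: a typical SDR panel vs. the full PQ HDR signal range
print(f"SDR (0.1 to 100 Nits):      {dynamic_range_stops(0.1, 100):.1f} stops")
print(f"HDR (0.005 to 10,000 Nits): {dynamic_range_stops(0.005, 10000):.1f} stops")
```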

In essence – HDR means more accurate pixels in terms of their brightness!

Read the next article on transfer functions and how they help in representing a wider brightness range for HDR.


Importance of localization in File based QC systems


Performing quality control on broadcast content has always been a very important element of any broadcast workflow. With the advent of OTT and the high expectations of today's discerning viewers, QC has become a critical aspect of content processing workflows. QC is the secret recipe for verifying the visual and audio quality of great content and can be the deciding factor between a bad experience, a decent experience and an amazing user experience. The value added by the QC process to the entire content value chain – from creation to post-production, to broadcast, to consumption – cannot be ignored. The proliferation of video content across formats, mediums and geographies makes manual QC nearly impossible; automated QC tools have become a necessity.

Need for Localization in QC Software

The content industry is spread across geographies and needs to cater to users across the globe. To keep up with this need, file-based QC has to not only broaden but also deepen its reach. The majority of QC tools are used by local operators and professionals. If a show is being aired in Japan, what do you think will be the primary means of communication between most of the stakeholders involved? In any country where English is not the first or predominant language, this will always be an issue. QC systems need an overhaul to better serve local professionals: it is not enough to simply provide training in the local language; every aspect of the software needs to be localized. Localization cannot be a half-measure – it needs to go the whole nine yards.

File-based QC software is a crucial support system for providers churning out engaging and flawless videos to develop and retain their user base. As such, the 'users' of QC software are the professionals working with and dependent on it. Most people are more comfortable in their mother tongue; they tend to think in that language, and a localized version of the software will be more accommodating of such psychological and subliminal needs.

Venera’s QC Systems come equipped with Localized UI

The Venera file-based QC systems – Pulsar (for on-premise) and Quasar (for cloud) – are already moving in this direction, as we have implemented a localized UI in languages such as Japanese and Korean. A version of the software in the local language enables easier access and allows users to fully exploit its functionality. Furthermore, the reporting system, the second aspect of file-based QC software, should also be localized. As you can imagine, a QC report needs to be shared not only within the organization but also with stakeholders across organizations that may be local or spread across the globe. This calls for a fully functional multilingual QC system that can 'talk' to stakeholders in several languages without loss of data or communication. Sound far-fetched? Think again! We have already implemented a localization framework to accommodate non-English languages; Japanese and Korean have already been added, with more localized languages coming. For us, localization is not just a word – it is the path ahead.

We would welcome your thoughts on the importance of localization in QC systems. Does this feature make your life easier? Are there any other languages you would like us to include? Feel free to share your feedback in the comments section below.

If you want a deeper insight into our QC solutions (Pulsar and Quasar) and this localization capability, do get in touch with us or request a free trial at sales@veneratech.com. You can read more about our solutions at:

Pulsar – www.veneratech.com/pulsar

Quasar – www.veneratech.com/quasar


Dynamic scaling of File QC on Cloud

The ability to dynamically scale up any process in the Cloud is the cornerstone of any native cloud-based solution, including cloud-based automated QC. And with the large and unpredictable pattern of content volume in the Cloud, this capability has become a necessity for cloud-based content workflows.

One of the key reasons for the large volume of content in the Cloud is multi-screen delivery. A large number of viewers now watch content on their personal devices rather than on their TVs, and VOD and "binge-watching" are common even for television content. Delivering content to various screens over the Internet has therefore become routine for a growing number of content providers and OTT services.

This requires content providers to embrace multi-bitrate Internet delivery in addition to television delivery, resulting in multiple versions of the same content (at different bitrates), which drastically increases the content volume. Content now needs to be transcoded to multiple encoding profiles and delivered efficiently over the web and on television. While all this is happening, content providers also need to keep their workflows efficient, as many of them are still in the process of establishing monetization models for their content.

The diagram below compares a pure television delivery with a multi-screen delivery. It is clear that, compared with television-only delivery, multi-screen delivery requires many more processing steps and delivers far more data.

[Image: Quasar – Dynamic Scaling]

Since all VOD content ultimately needs to be delivered over the web, it also makes sense to process this content in the Cloud rather than on-premise. Many operators still process content on-premise, or use hybrid workflows that keep basic content preparation on-premise while performing transcoding/packaging operations in the Cloud before delivering to the CDN. It is expected that more and more content providers will move to fully cloud-based workflows for the following reasons:

- Efficient content transfer: content multiplication and delivery to the CDN directly from the Cloud
- Availability of tools from OVPs in the Cloud to handle multi-screen content processing and delivery
- Auto-scaling of processing resources in the Cloud
- Infrastructure reliability of public Cloud platforms
- OPEX-based business models

Auto-scaling of QC resources in the Cloud

File-based QC is a necessary process in every professional content workflow. With the advent of multi-screen delivery, content volumes have reached levels at which manual QC is no longer feasible. There is an urgent need to adopt automated file QC tools in these workflows to ensure that the required content quality is delivered to viewers, simply because viewers today are not ready to compromise on quality – whether for television or for VOD delivery over the Internet.

While content providers embrace multi-screen delivery and adopt file QC systems, what complicates such cloud-based workflows is the unpredictable nature of the content volume – whether during the day or around key events – and the need for timely delivery of that content regardless of drastic, sudden increases in volume.

To attain the above benefits, it is essential that the right processing tools, including file QC, are used in the Cloud on an OPEX basis, with the ability to exploit key Cloud capabilities such as auto-scaling. In the absence of auto-scaling, users have to make large upfront investments in QC tools, with the majority of that investment lying unutilized during low-usage periods. And if such investments are not made, content producers/providers must either deal with large content processing queues – leading to delayed delivery that may not be acceptable to the intended audience – or risk sending untested content to viewers.

Auto-scaling of QC resources in the Cloud offers significant value to content providers, both in terms of investment and SLA. It is a necessary capability for any file QC tool claiming to be a native cloud QC solution.

It is clear that traditional QC systems designed for on-premise use cannot be used as effectively in the Cloud. This can lead to severe bottlenecks in Cloud workflows, resulting in either sub-optimal QC or delayed content deliveries. It is not enough to say that "on-premise QC systems are compatible with Cloud storage and users can buy licenses to install on static VMs". Users now need native Cloud QC systems for their Cloud workflows, and Quasar fits the bill perfectly.

QUASAR – Native Cloud File QC

Quasar, a native Cloud file QC service, is the only native QC service in the market that allows users to ensure their content quality while exploiting the full benefits of the Cloud. One of Quasar's key capabilities is auto-scaling, with which it automatically spins up additional QC processing resources as and when needed to eliminate content queues. Moreover, Quasar is capable of spinning up these additional resources automatically and dynamically in the same region as the content. This capability allows for instant content validation, leading to a quick decision on content delivery.

[Image: Auto-scaling file QC on Cloud]

Quasar is available as a 'SaaS service', wherein Venera manages the entire infrastructure and users access the system through its REST API. Quasar is also available as a 'Private Edition', wherein users can set up the system inside their own VPC (Virtual Private Cloud). Quasar is available with monthly, annual or longer-term pay-as-you-go plans that users can choose from, depending on their content volumes. Read more about Quasar at www.veneratech.com/quasar.

Automated File QC System Update For Photosensitive Epilepsy (PSE)


Ofcom, the United Kingdom's communications regulator, has announced an update to ITU-R BT.1702-0, which is now available in the form of ITU-R BT.1702-1. From January 1st, 2019, all TV programs must be tested using photosensitive epilepsy algorithms to ensure that they comply with the revisions to the ITU Recommendation.

A complaint to Ofcom revealed inconsistencies in the interpretation of the Broadcasting Code's Harm and Offence section, which details the requirements for PSE testing. The circumstances of the sequence that caused the complaint were very unusual and are very unlikely to occur naturally (the sequence was an edited effect). In this update, the ITU has clarified the definition of a sequence of potentially harmful flashes.

 

The Need for Photosensitive Epilepsy Testing

 

Photosensitive Epilepsy (PSE) is triggered by visual stimuli that temporarily overload the brain and cause a seizure. Common triggers are flashes or repeated changes between dark and light (such as stroboscopic effects or flash photography), some geometric patterns, and certain colors such as deeply saturated red. PSE affects around one in four thousand people, with the seven-to-twenty age group being five times more likely to be susceptible than the rest.

PSE testing is mandatory in the UK and Japan. All content providers and broadcasters need to check for PSE compliance before the content is made available to consumers. The producer of a program, as well as the broadcaster, may be liable for any action taken by Ofcom or a member of the public for a breach of the photosensitive epilepsy requirements.

PSE testing is also used worldwide by content providers who create or repurpose content for these countries. Its use is growing in the US as well, particularly as more US-produced content is distributed internationally.

 

Automated File QC system for PSE Testing

 

Photosensitive epilepsy testing can be performed along with other content checks using an automated QC system. Considering the penalties involved in a violation of the PSE guidelines, it is of paramount importance that a certified, reliable implementation is used for PSE testing. Some of the proprietary implementations available in the market are reported to be inaccurate, and using them exposes content owners and broadcasters to potential penalties.

Venera's automated QC systems – Pulsar (for QC of on-premise content) and Quasar (for QC of cloud-based content) – use the photosensitive epilepsy implementation from Cambridge Research Systems, the de facto standard for PSE testing, producing Cambridge-approved certifications. Both systems are already compliant with the updated Ofcom guidelines; as a result, Pulsar and Quasar users are assured that if their content passes QC inspection, it meets the latest photosensitive epilepsy standards.

About Pulsar Automated QC Solutions

 

Pulsar is the world's fastest file-based automated QC system and is used by some of the largest media companies as well as a number of smaller boutique post houses and production companies. Pulsar's fast performance, ease of use, and competitive pricing give users the best ROI for a QC system in their operations. Pulsar provides a very rich range of features and detection parameters. Aside from support for all common video/audio formats, Pulsar offers many advanced capabilities such as validation and analysis of HDR, adaptive bitrate content, photosensitive epilepsy (PSE), and IMF and DCP packages. Pulsar comes with ready-to-use QC templates for some of the more popular platforms such as Netflix, iTunes, CableLabs, and DPP, and has been integrated with leading file transfer and media asset management solutions.

Download the Pulsar Brochure

 

About Quasar Automated QC Solutions

 

[Image: Quasar Automated Video QC System – Photosensitive Epilepsy]

Quasar is the first 'native' cloud-based QC solution, running on both the AWS and Microsoft Azure cloud platforms. Aside from all the QC-specific features of Pulsar, Quasar is built from the ground up to take advantage of the Cloud architecture, offering a usage-based pricing model, dynamic processor scalability, dynamic provisioning in regions local to the customer, a high level of security, native support for cloud-based storage systems, and easy integration with other native cloud-based solutions. Quasar is ideal for media companies that have migrated to the cloud, such as many of the major studios, broadcasters and production houses, and for those who conduct business in the cloud, such as OTTs (Over-The-Top content providers) and OVPs (Online Video Platforms).

Download the Quasar Brochure

 
