Case Study – Monitoring in a Broadcast Environment
Consumers of broadcast-grade video have high expectations for virtually perfect video quality. They also want an endless supply of customizable channels made specifically for them.
Program originators are dividing and subdividing their content across a variety of channels to capture targeted demographics. The broadcasters sell advertising based on their demographics and market share. All of this is coming at a price. As the channel count increases, so does the complexity.
The broadcaster must weigh increased channel count against quality. To this end, broadcasters are becoming more sophisticated in developing company-wide quality assessment strategies, which include monitoring, reporting, and troubleshooting. The broadcaster not only wants to know that an error occurred, but also how it affected the customers’ experience, and they want that answer in real time.
A typical program originator owns and operates 5-10 channels, and each channel may need to be formatted into multiple resolutions and customized for different regions around the world. The channel count can easily reach 100.
Before the digital explosion, the program originator uplinked the one channel it operated to a satellite and was done. Switching to a digital infrastructure afforded each program originator more space, more tools, and more end-user display options for its video content. This allowed them to segment their programming toward specific demographics – for instance, 18-34 year old males or Chinese-speaking females under 48 years old. Segmented programming allowed segmented advertising and thus more potential revenue.
With an ever-increasing channel count, the program originator is now faced with more testing and monitoring. Today, they monitor the data streams looking for bit errors, missing audio, missing video, missing captioning, missing data codes, and so on. Any of these may affect the video quality. To assess the video quality, they display their content at various points within their facility on a wall of monitors (known as a video wall) and hire people to watch and judge it. Of course, with multiple channels being monitored simultaneously, errors will be missed. How can one person view and listen to multiple points at the same time?
Supervising and monitoring all of this becomes a real challenge. Program originators would like to automate this process. This paper will show how multiple program originators have attacked this problem using tools from Video Clarity.
Pictorial Work Flow
The above picture illustrates a simplified broadcast workflow. It applies regardless of the type of broadcaster:
- Program Originator (NBC, HBO, BBC),
- Affiliate/Owned-and-Operated (O&O) broadcaster (KRON, WGN),
- Re-broadcaster (JSkyB, Comcast, AT&T).
The points marked with an “X” are examples of points that can be monitored.
In many cases, 4 different points are monitored:
- Before the Encoder (compression device)
- After the Encoder
- After transmission (network – satellite, microwave, fiber)
- The affiliate’s or re-broadcaster’s signal (satellite, IPTV, cable)
This means that one operator might be judging 4 different points; if they see something wrong, they “fail over” to the alternate feed (duplicated head-end structure), so they are really looking at 8 feeds. You will notice in the above picture that all of the feeds are duplicated. The idea is that redundant paths will not both fail at the same time in the same way.
When the program originator is operating 5-10 channels, those 8 feeds per channel mean a video wall with 40-80 different images, and one operator watching them for 8 hours a day.
Video Clarity RTM Solution
Given this many channels, errors will occur. Video Clarity developed a real-time monitor (RTM) that compares 2 points along the transmission path. It aligns the audio and video and generates alerts when:
- Video Quality drops
- Audio Quality drops
- VANC data (closed caption/subtitle, parental controls, ratings) is not complete
- A/V delay is not correct (lip-sync problem)
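The alert conditions above can be sketched as threshold checks on per-interval measurements. This is a hypothetical illustration only – the field names, metric scales, and thresholds are assumptions, not RTM’s actual API:

```python
# Hypothetical alert evaluation for one measurement sample. The field
# names and threshold values are illustrative assumptions, not RTM's
# actual interface.
def evaluate_alerts(sample, video_floor=70.0, audio_floor=70.0,
                    max_av_skew_ms=45.0):
    alerts = []
    if sample["video_quality"] < video_floor:
        alerts.append("VIDEO_QUALITY_DROP")
    if sample["audio_quality"] < audio_floor:
        alerts.append("AUDIO_QUALITY_DROP")
    if not sample["vanc_complete"]:            # captions, ratings, etc.
        alerts.append("VANC_INCOMPLETE")
    if abs(sample["av_delay_ms"]) > max_av_skew_ms:
        alerts.append("LIP_SYNC")
    return alerts
```

In practice each check would be driven by the measured quality scores and VANC completeness that RTM computes per comparison interval.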
Figure 3: Hardware Video Quality Monitoring
RTM takes 2 SDI feeds and aligns the two feeds (we will say reference and processed).
The patent-pending alignment looks for areas with a temporal disturbance. In the video, these include:
- scene changes,
- fades to black, or
- fast motion changes.
In the audio, these include transitions such as:
- talking to silence,
- talking to music, or
- music to silence.
Regardless, what matters is not the magnitude of the change so much as its difference from the average. For example, a hockey game has a great deal of motion, but the motion is fairly consistent. In video conferencing, a camera-angle shift could be a huge motion, relatively speaking.
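The “difference from average” idea can be sketched as flagging frames whose activity deviates strongly from a running average. This is only an illustration of the principle, not the patent-pending method; the window size and threshold factor are assumptions:

```python
import statistics

# Flag frames whose activity (e.g. mean absolute frame difference, or
# audio level change) deviates strongly from the recent running average.
# A sketch of "difference from average" event detection; the window
# length and deviation factor k are illustrative assumptions.
def temporal_events(activity, window=30, k=3.0):
    events = []
    for i in range(window, len(activity)):
        recent = activity[i - window:i]
        mean = statistics.fmean(recent)
        spread = statistics.pstdev(recent) or 1e-9   # avoid divide-by-zero
        if abs(activity[i] - mean) > k * spread:
            events.append(i)   # candidate scene change / fade / motion burst
    return events
```

On a steady hockey feed, the running average itself is high, so ordinary play does not trigger events; only changes relative to that baseline do.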
Figure 4: Temporal Event in the Video and Audio
The reference and processed audio and video are analyzed separately, and an offset for each is calculated. This offset is the delay caused by processing. In theory, the audio and video delays should be equal. If they are not, the difference can create a lip-sync problem.
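One simple way to estimate such an offset is to cross-correlate activity signatures of the reference and processed feeds and take the best-matching lag; the lip-sync skew is then the difference between the audio and video offsets. This is a sketch under that assumption, not Video Clarity’s patent-pending alignment:

```python
# Sketch: estimate the processing delay as the lag (in samples) that
# maximizes correlation between reference and processed activity
# signatures. Illustrative only -- not the patent-pending method.
def best_offset(ref, proc, max_lag=100):
    def corr(lag):
        return sum(ref[i] * proc[i + lag]
                   for i in range(len(ref)) if i + lag < len(proc))
    return max(range(max_lag + 1), key=corr)

# Synthetic example: the processed feed is the reference delayed by 5.
ref = [0.0] * 20 + [1.0, 3.0, 1.0] + [0.0] * 20
proc = [0.0] * 5 + ref
video_delay = best_offset(ref, proc, max_lag=10)
# The audio offset would be computed the same way from audio signatures;
# (audio_delay - video_delay) != 0 indicates a lip-sync problem.
```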
If the audio arrives early, the disturbance is greater, because we are conditioned to wait for the sound. From basic physics, light travels faster than sound, so we see the image first and then wait for the audio. If this ordering is reversed, it bothers us.
RTM can run via SNMP, socket commands, or from its graphical user interface (GUI), shown below. All of the data is logged, and alerts are generated.
Figure 5: RTM GUI
The alerts can be used in a variety of ways:
- Automatically switch from the A feed to the B (alternate) feed
- Aid the operator in deciding whether to perform a manual “fail over”
- Log the error for later analysis to prevent future issues
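Automatic fail-over driven by these alerts could be sketched as a debounced switch: move to the B feed only after several consecutive alert-bearing intervals on A while B remains clean. The class name and debounce count below are illustrative assumptions, not RTM’s switching logic:

```python
# Hypothetical fail-over logic: switch from feed A to feed B only after
# `debounce` consecutive alert-bearing intervals on A while B is clean.
# The debounce count is an illustrative assumption.
class FailoverController:
    def __init__(self, debounce=3):
        self.debounce = debounce
        self.active = "A"
        self.bad_streak = 0

    def update(self, a_alerts, b_alerts):
        if self.active == "A":
            if a_alerts and not b_alerts:
                self.bad_streak += 1
                if self.bad_streak >= self.debounce:
                    self.active = "B"    # automatic fail-over
            else:
                self.bad_streak = 0      # A recovered, reset the streak
        return self.active
```

Debouncing avoids switching on a single transient glitch, which matters because the redundant paths are assumed not to fail at the same time in the same way.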
RTM is currently being used by many high profile broadcasters. To get a demonstration, please contact Video Clarity or one of its channel partners (www.videoclarity.com/customers) or visit us at one of the shows listed at www.videoclarity.com.