Video Testing for STB Manufacturers
This paper explores the challenges that video STB and decoder manufacturers face when assessing video quality. Ultimately, video quality is judged by the customer’s reaction to the picture on their display (PC, POD, TV, etc.).
In a relatively short time, MPEG encoding technology has completely changed the delivery of television content to the consumer. Whether through satellite, cable, Internet, DVD, or over-the-air, the efficiencies of MPEG compression have enabled a revolution within the industry and driven set-top box (STB) development. STB functions are continually advancing, but customers’ reactions usually center on the quality of the picture.
STB designs go through a rigorous process to verify that the individual parts of the system – RF receivers, demodulators, MPEG processors, video and audio outputs, RF modulators, and decryption – work flawlessly. However, video quality testing is usually relegated to a very short window, so the test setup must be simple and quick. Ideally, a single video quality test box would produce an exact, objective video quality number. However, based on the Video Quality Experts Group (VQEG) study, no model can completely assess video quality. Their conclusions, thus far, have been captured in the “VQEG Report on the Validation of Objective Models of Video Quality Assessment.”
In the interim, a hybrid approach is needed: one that lets you observe sequences with your own eyes while continually improving the video quality scoring. The test setup, simply stated, is:
- Start with a known video sequence.
- Inject a compressed video sequence into the network.
- Decode the processed video sequence.
- Capture the processed video sequence.
- Display the “gold standard” and processed video sequences.
- Bring in experts to subjectively vote.
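The six steps above can be sketched as a single loop. Every name in this sketch – the `impair`, `decode`, and `vote` callables, and the three-viewer panel – is a hypothetical stand-in for the real network, STB, and expert panel, not an actual test-equipment API:

```python
# Minimal sketch of the subjective test loop, with each stage
# modelled as a plain function over a list of frame values.
# All stage names here are illustrative stand-ins.

def run_quality_test(source, impair, decode, vote, viewers=3):
    compressed = impair(source)      # inject the sequence into the network
    processed = decode(compressed)   # decode at the STB / decoder
    captured = list(processed)       # capture the processed output
    # Display `source` (the "gold standard") next to `captured`,
    # then collect one subjective vote per expert viewer.
    return [vote(source, captured) for _ in range(viewers)]

# Trivial usage: a pass-through network and an ideal decoder
# should earn the top score from every viewer.
votes = run_quality_test(
    source=[16, 32, 64],
    impair=lambda frames: frames,               # no network impairment
    decode=lambda frames: frames,               # ideal decoder
    vote=lambda src, cap: 5 if src == cap else 3,
)
print(votes)  # → [5, 5, 5]
```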
Complexity arises because:
- New video processing systems may need new equipment to play back the video sequences.
- The original and processed video sequences should be displayed in random orders.
- Expert viewers are expensive and do not produce repeatable results.
So a need arises for next-generation video quality test equipment.
Each vendor builds unique test equipment to verify its new algorithms, so the first job is to debug the test equipment before it can be used to verify a new design. Debugging the test equipment can take as long as, if not longer than, debugging the device under test.
Video Quality Testing Methodologies
In the subjective case, experts view multiple test clips and vote on a quality scale (usually 1-5). The test equipment must play the video sequences in a pre-defined order and allow the experts time to vote. This is a tedious exercise and is not highly repeatable, but because it is based on actual viewers, it is accurate.
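Subjective votes on the 1-5 scale are conventionally averaged into a mean opinion score (MOS). The sketch below is a minimal illustration; the eight-viewer panel and the 1.96 normal-approximation multiplier for the confidence interval are assumptions for the example, not part of any particular test plan:

```python
import statistics

def mean_opinion_score(votes):
    """Average per-viewer votes (1-5 scale) into a MOS and report a
    95% confidence half-width using the normal approximation.

    `votes` is a list of individual expert scores; the 1.96
    multiplier assumes enough viewers for the approximation
    to be reasonable.
    """
    mos = statistics.mean(votes)
    sem = statistics.stdev(votes) / len(votes) ** 0.5  # standard error
    return mos, 1.96 * sem

# Example: eight experts rate one processed clip.
mos, ci = mean_opinion_score([4, 5, 4, 3, 4, 4, 5, 3])
print(f"MOS = {mos:.2f} +/- {ci:.2f}")  # → MOS = 4.00 +/- 0.52
```

The confidence interval is what makes the lack of repeatability visible: a wide interval signals that more viewers, or better-controlled viewing conditions, are needed.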
In the objective case, an algorithm “watches” the video sequence and measures luminance, chrominance, blockiness, edge sharpness, and temporal changes. This data is then correlated with the source video sequence, and an assessment is made about quality. To do this, care must be taken to align the data spatially and temporally so that alignment errors do not affect the video score.
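As a concrete example of a simple full-reference objective metric, the sketch below computes peak signal-to-noise ratio (PSNR) between two aligned luminance frames. Production analyzers use more perceptual models than this, so treat it purely as an illustration of scoring a processed frame against its source:

```python
import math

def psnr(reference, processed, max_value=255):
    """PSNR in dB between two equal-sized luminance frames,
    represented here as flat lists of 8-bit pixel values."""
    assert len(reference) == len(processed), "frames must be aligned and equal-sized"
    mse = sum((r - p) ** 2 for r, p in zip(reference, processed)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return 10 * math.log10(max_value ** 2 / mse)

# A uniform 1-level luminance error gives MSE = 1, i.e. ~48.1 dB.
print(round(psnr([100, 100, 100, 100], [101, 101, 101, 101]), 1))  # → 48.1
```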
Regardless of the methodology, the video test setup must be repeatable. Ideally, the video scoring is also repeatable.
Video Quality Testing Equipment
To streamline the process, equipment for video quality testing needs to be defined, which can capture, play, and analyze any two video sequences. Further, as new input/output modules are continuously under development, the test equipment should use an open-architecture approach to ease upgradeability.
The following are the key attributes of a robust video quality testing tool.
- Allow a way to import video sequences regardless of their file type – e.g., AVI, QuickTime, Raw, Video Editor, MPEG, etc.
- Convert all video sequences to a user-selectable resolution, bit depth, and color format so that they can be displayed in multiple viewing modes on the same display.
- Serve video sequences to the decoder over DVB-ASI or an IP connection.
- Capture the output of the decoder.
- Align the captured and played out video sequences both spatially and temporally.
- Allow multiple playing modes such as play, shuttle, jog, pause, zoom and pan.
- Apply full-reference and no-reference objective metrics to the video sequences to score the video.
- Log/graph the objective scores for easy analysis.
- Export pieces of video sequences for further off-line analysis.
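Temporal alignment, called for in the list above, can be illustrated by matching per-frame signatures. The brute-force lag search below over mean-luminance values is a sketch of the idea under simplifying assumptions (a pure frame delay, no dropped frames), not how any particular product implements it:

```python
def temporal_offset(reference, captured, max_lag=30):
    """Estimate the frame delay of a captured sequence relative to the
    reference by minimising the mean absolute difference between
    per-frame signatures (e.g. mean luminance) over candidate lags.

    Returns the lag, in frames, at which the captured sequence
    best matches the reference.
    """
    best_lag, best_cost = 0, float("inf")
    for lag in range(max_lag + 1):
        pairs = list(zip(reference, captured[lag:]))
        if not pairs:
            break  # lag exceeds the captured sequence length
        cost = sum(abs(r - c) for r, c in pairs) / len(pairs)
        if cost < best_cost:
            best_lag, best_cost = lag, cost
    return best_lag

# A capture that starts two black frames late is detected as lag 2.
signature = [10, 40, 80, 120, 90, 30, 10, 5]
print(temporal_offset(signature, [0, 0] + signature))  # → 2
```

Spatial alignment is the same idea in two dimensions: search over small horizontal and vertical shifts for the one that minimises the residual before scoring.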
Setting up Consistent Tests
Simplicity is the key to any test. Aim to change only one variable at a time.
- Select a known test sequence, either one of the VQEG sequences or something indicative of your broadcast material.
- Serve a video sequence to the decoder(s).
- Build out a network into which you can inject known errors.
- Use a set-top box or video decompression unit. For ultimate video quality, select one with SDI output.
- Make the appropriate video-level adjustments to the analog outputs using a test pattern.
- Capture the video decompression output to compare it with the “Golden” video sequence.
- Compare the source and resultant video sequences on either one monitor with pan/scan or on two monitors.
- If two monitors are used, calibrate both to the same black level, contrast, etc. using a known test pattern.
- Store the results of the objective metrics and the subjective MOS scores.
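The final step, storing objective and subjective results, can be as simple as a per-clip CSV log that lets runs be compared over time. The field names and clip names here (`clip`, `psnr_db`, `mos`) are illustrative, not a prescribed schema:

```python
import csv
import io

def log_results(rows):
    """Serialise per-clip results (clip name, objective PSNR in dB,
    subjective MOS) to CSV text for archiving or graphing."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["clip", "psnr_db", "mos"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# Hypothetical results for two test clips.
print(log_results([
    {"clip": "vqeg_src1", "psnr_db": 38.2, "mos": 4.1},
    {"clip": "vqeg_src2", "psnr_db": 31.7, "mos": 3.4},
]))
```

Logging both kinds of score side by side is what makes it possible to check, over time, how well the objective metric tracks the expert votes.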
In all of the examples above, the tests can be performed using software mockups of the actual hardware.
To simplify the workflow, any video sequence can be played while another is being captured, combining the video server, capture device, viewer, and video analyzer into one unit. By doing this, ClearView controls the test environment, which allows for automated, repeatable, quantitative video quality measurements.
Quantitative Picture Quality Evaluation
For more information about Video Clarity, please visit http://www.videoclarity.com.