Video Testing for Encoder Manufacturers
This paper explores the challenges that video encoder (MPEG, JPEG, wavelet compression) equipment manufacturers face when assessing video quality. A compression system’s output video quality is a key component in determining the system’s performance.
For visual and audio data, some loss of quality can be tolerated without losing the essential nature of the data. By taking advantage of the limitations of the human sensory system, a great deal of space can be saved while producing an output which is nearly indistinguishable from the original. These lossy data compression methods typically offer a three-way tradeoff between compression speed, compressed data size and quality loss.
The question must be asked: how much data can be removed before the customer objects? Testing is therefore increasingly important. Ideally, a single video quality test system would produce an exact, objective video quality number. However, based on the Video Quality Experts Group (VQEG) study, no model can completely assess video quality. Their conclusions, thus far, have been captured in the “VQEG Report on the Validation of Objective Models of Video Quality Assessment.”
In the interim, a hybrid approach is needed: one that lets you observe sequences with your own eyes while continually improving the video quality scoring. The test setup, simply stated, is:
- Start with a known video sequence.
- Compress the video sequence.
- Decode the processed video sequence.
- Capture the processed video sequence.
- Display the original and processed video sequences.
- Bring in experts to subjectively vote.
Complexity arises because:
- New video processing systems may need new equipment to play back the video sequences.
- The original and processed video sequences need to be captured, aligned, and displayed in multiple viewing modes for analysis.
- Expert viewers are expensive and do not produce repeatable results.
So a need arises for next-generation video quality test equipment.
Each vendor builds unique test equipment to verify its new algorithms, so the first job is to debug the test equipment before it can be used to verify a new design. Debugging the test equipment can take as long as, if not longer than, debugging the equipment under test.
Video Quality Testing Methodologies
In the subjective case, experts view multiple test clips and vote on a quality scale (usually 1-5). The test equipment must play the video sequences in a pre-defined order and allow the expert time to vote. This is a tedious exercise and is not highly repeatable, but because it is based on actual viewers, it is accurate.
In the objective case, an algorithm “watches” the video sequence and measures luminance, chrominance, blockiness, edge sharpness, and temporal changes. This data is then correlated with the source video sequence, and an assessment is made about quality. To do this, care must be taken to spatially and temporally align the data so that alignment errors do not affect the video score.
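To make the idea of a full-reference objective metric concrete, here is a minimal sketch of PSNR (peak signal-to-noise ratio), one of the simplest such measures. It is not one of the specific measurements named above, and the function and its parameters are illustrative, assuming frames are supplied as flat sequences of 8-bit pixel values.

```python
import math

def psnr(reference, processed, max_value=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences.

    A higher score means the processed frame is closer to the reference.
    Identical frames yield infinity.
    """
    if len(reference) != len(processed):
        raise ValueError("frames must have the same number of pixels")
    mse = sum((r - p) ** 2 for r, p in zip(reference, processed)) / len(reference)
    if mse == 0:
        return math.inf
    return 10.0 * math.log10(max_value ** 2 / mse)
```

Note that PSNR is only meaningful when the two sequences are already spatially and temporally aligned, which is exactly why the alignment step above matters.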
Regardless of the methodology, the video test setup must be repeatable. Ideally, the video scoring is also repeatable.
Video Testing Procedure
To streamline the process, video quality test equipment must be defined that can capture, play, and analyze any two video sequences. Further, as new input/output modules are continuously under development, the test equipment should use an open-architecture approach to ease upgradeability.
The following are the key attributes of a robust video quality testing tool:
- Allow a way to import video sequences regardless of their file type – e.g. AVI, QuickTime, Raw, Video Editor, MPEG, etc.
- Convert all video sequences to user-selectable resolution, bit depth, and color format so that they can be displayed in multiple viewing modes on the same display.
- Serve video sequences to the encoder and/or video processing unit using SDI, Component, DVB-ASI or DVI.
- Capture the output of the encoder or the encoder/decoder pair.
- Align the captured and played out video sequences both spatially and temporally.
- Allow multiple playing modes such as play, shuttle, jog, pause, zoom and pan.
- Apply reference and no-reference objective metrics to the video sequences to score the video.
- Log/graph the objective scores for easy analysis.
- Export pieces of video sequences to further analyze off-line.
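Of the attributes above, temporal alignment is the one most often underestimated. Below is a minimal sketch of one way to find the frame offset between a played-out reference and its captured copy, assuming each frame has been reduced to a single per-frame signature (e.g. mean luma); the function name and approach are illustrative, not a description of any particular product’s algorithm.

```python
def temporal_offset(reference, captured, max_lag=10):
    """Find the frame lag that best aligns a captured sequence to its reference.

    Both inputs are lists of per-frame signatures (e.g. mean luma values).
    Returns the lag, in frames, that minimizes the mean absolute difference.
    """
    best_lag, best_err = 0, float("inf")
    for lag in range(max_lag + 1):
        pairs = list(zip(reference, captured[lag:]))
        if not pairs:
            continue
        err = sum(abs(r - c) for r, c in pairs) / len(pairs)
        if err < best_err:
            best_lag, best_err = lag, err
    return best_lag
```

Once the lag is known, the captured sequence can be shifted so that frame-by-frame comparison and objective scoring operate on matching frames.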
Setting up Consistent Tests
Simplicity is the key to any test. Aim to change only one variable at a time.
- Select a known test sequence, either one of the VQEG sequences or something that is indicative of your broadcast material.
- Use serial digital video as the input to the video processor and/or video compression unit.
- Build out a network into which you can inject known errors.
- Use a common set-top box or video decompression unit. For ultimate video quality, select one with SDI output.
- Capture the video decompression output to compare it with the input.
- Compare the source and resultant video sequences either on one monitor with pan/scan or on two monitors.
- If two monitors are used, calibrate both to the same black level, contrast, etc. using a known test pattern.
- Store the results of the objective metrics and the subjective MOS (mean opinion score) ratings.
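The final step above, storing results, is what makes runs comparable over time. The sketch below appends results to a CSV log; the column layout (sequence name, objective score, MOS) is an illustrative assumption, not any product’s log format.

```python
import csv

def log_results(path, rows):
    """Append test results to a CSV log file.

    Each row is a tuple such as (sequence_name, objective_score, mos),
    an illustrative layout chosen for this sketch.
    """
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerows(rows)
```

Appending rather than overwriting means one log file accumulates every run against a sequence, so a regression in a new encoder build is visible as a drop relative to earlier rows.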
Video compression systems inevitably introduce artifacts as the bit rate drops. Running a system side-by-side with a video reference is the most useful way to categorize and relate the severity and frequency of the artifacts. While looking at artifacts, the video test equipment should highlight edge noise, blocking and tiling, dark pictures, and tearing.
Please remember that when testing encoder or processing equipment, tuning control mechanisms may need to change. Thus, testing is not complete until the equipment is optimally calibrated. Further, some manufacturers’ equipment will look better in some instances, while other equipment will look better in others.
Quantitative Picture Quality Evaluation
To simplify the workflow, any video sequence can be played while capturing another, combining the video server, capture device, viewer, and video analyzer into one unit. By doing this, ClearView controls the test environment, which allows for automated, repeatable, quantitative video quality measurements.
Automated Pass/Fail Testing
Send a known sequence through a processing unit and network, record the output from a hardware decoder, and compare it to a pre-recorded reference source, generating a pass/fail result to a ClearView log file or CLI script.
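The pass/fail decision described above can be as simple as checking per-frame objective scores against a threshold. The sketch below is a minimal illustration; the threshold and pass-ratio values are hypothetical examples, not ClearView defaults.

```python
def pass_fail(frame_scores, threshold_db=35.0, min_pass_ratio=0.95):
    """Decide pass/fail from per-frame objective scores (e.g. PSNR in dB).

    Passes when at least min_pass_ratio of the frames meet threshold_db.
    Both parameter defaults are illustrative, not product defaults.
    Returns (passed, ratio) so the ratio can be logged alongside the verdict.
    """
    passing = sum(1 for s in frame_scores if s >= threshold_db)
    ratio = passing / len(frame_scores)
    return ratio >= min_pass_ratio, ratio
```

Tolerating a small fraction of low-scoring frames avoids failing a whole run on a single scene change or alignment glitch, while still flagging sustained quality loss.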
The following ingredients provide the basis for useful video quality testing:
- Good preparation
- Known test sequences
- Flexible video test equipment
- Methodical test plan
- Experienced judges
For more information about Video Clarity, please visit their website at http://www.videoclarity.com.