


Metrics Info

This page describes the metrics implemented in MSU VQMT.


Please don't hesitate to contact us with any questions or problems. We are also open to feature requests.

e-mail: video-measure@compression.ru


Basic information

In MSU VQMT, since version 12.0, we consider the range of all inputs to be 0..1. Each channel is brought to this range in accordance with the settings.

VQMT has a Sample conversion setting, which determines how the conversion of integer samples to the 0..1 range is performed. This setting has two values (a sketch of both modes follows this list):

  • shift. The value 2^n, where n is the sample bit depth, is converted to 1. Note that the maximum of 1 is unreachable: in this mode the real reachable maximum depends on the bit depth of the source image. For example, for 8-bit input it is 255/256; for 16-bit input it is 65535/65536. This method corresponds to some standards of HDR TV implementation and was the default for some metrics in earlier versions of VQMT. It is a poor choice for inputs with low bit depth: for example, the maximum of a 1-bit monochrome image will be interpreted as 0.5, which is gray, not white.
  • repeat tail. The value 2^n − 1, where n is the sample bit depth, is converted to 1. In this mode the real reachable maximum is always 1, and the range does not depend on the input bit depth.
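
A minimal NumPy sketch of the two modes (the function name and signature are illustrative, not the VQMT API):

```python
import numpy as np

def to_unit_range(samples: np.ndarray, bit_depth: int, mode: str) -> np.ndarray:
    """Convert integer samples to the 0..1 range (illustrative, not the VQMT API)."""
    if mode == "shift":
        # 2^n maps to 1; the true maximum 2^n - 1 only reaches (2^n - 1) / 2^n,
        # e.g. 255/256 for 8-bit input.
        return samples / float(2 ** bit_depth)
    if mode == "repeat tail":
        # 2^n - 1 maps to 1; the reachable maximum is exactly 1 for any bit depth.
        return samples / float(2 ** bit_depth - 1)
    raise ValueError(f"unknown sample conversion mode: {mode!r}")

# 1-bit white: 'shift' yields 0.5 (gray), 'repeat tail' yields 1.0 (white).
white = np.array([1])
print(to_unit_range(white, 1, "shift"), to_unit_range(white, 1, "repeat tail"))
```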

The real range of the input can also be shrunk if an RGB ↔ YUV conversion is performed. PC conversion tables do not modify the range, but REC tables do.

PSNR

This metric, often used in practice, is called peak signal-to-noise ratio (PSNR):

PSNR = 10 · log10( MaxErr² · w · h / Σ(i,j) (Xij − Yij)² ),

where MaxErr is the maximum possible absolute value of the color component difference, w is the video width, h is the video height, and Xij, Yij are the color component values of the two frames at position (i, j). Generally, this metric is equivalent to Mean Squared Error (MSE), but it is more convenient to use because of the logarithmic scale. It has the same disadvantages as the MSE metric.
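
Since MaxErr² · w · h divided by the sum of squared differences is simply MaxErr² / MSE, the computation reduces to a few lines; a minimal sketch (names are illustrative):

```python
import numpy as np

def psnr(x: np.ndarray, y: np.ndarray, max_err: float = 1.0) -> float:
    """PSNR between two equal-size frames; max_err = 1 for samples in 0..1."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical frames: PSNR is unbounded
    return 10.0 * np.log10(max_err ** 2 / mse)
```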

In MSU VQMT you can calculate PSNR for all YUV and RGB components and for the L component of the LUV color space. Since VQMT 12 you can also calculate it over the whole YUV or RGB space, obtaining a single value for all three components. The PSNR metric is easy and fast to calculate, but it does not always match human perception.

Since VQMT 12 all input samples are implicitly cast to the interval 0..1, so MaxErr is always 1. In VQMT 11 and earlier, the PSNR metric could use for MaxErr the real maximum value that could appear after all sample conversions and transformations. VQMT 12 uses the same behavior in Legacy mode. To simulate this without Legacy mode, you can also take the following steps:

  • set the sample conversion mode to 'Repeat',
  • set the RGB ↔ YUV table to PC range.

The modern PSNR metric is quite similar to the PSNR256 metric of earlier VQMT versions.

Old versions also had the APSNR and APSNR256 metrics, which differed from the corresponding metrics in the method of calculating the average: the A- metrics used the arithmetic mean of per-frame values, while the ordinary PSNR and PSNR256 used the total PSNR (computed from the total MSE over all processed frames). In VQMT 12 both average values are calculated simultaneously, so the need for the A- metrics has disappeared. The sketch below shows both averaging methods.
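
The two averages differ only in where the logarithm is applied; a sketch given per-frame MSE values (function and argument names are illustrative):

```python
import numpy as np

def average_psnr(frame_mses, max_err: float = 1.0):
    """Return (total_psnr, mean_psnr) over a sequence of per-frame MSE values.

    total_psnr: PSNR of the mean MSE over all frames -- the old
                PSNR/PSNR256 behavior.
    mean_psnr:  arithmetic mean of per-frame PSNR values -- the old
                APSNR/APSNR256 behavior.
    """
    mses = np.asarray(frame_mses, dtype=np.float64)
    total_psnr = 10.0 * np.log10(max_err ** 2 / mses.mean())
    mean_psnr = np.mean(10.0 * np.log10(max_err ** 2 / mses))
    return float(total_psnr), float(mean_psnr)
```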

Visualization colors, in order of growing PSNR: red, yellow, green, blue, black (note: larger PSNR means a smaller difference).

MSAD

The value of this metric is the mean absolute difference of the color components at the corresponding points of the two images. This metric is used for testing codecs and filters.

Delta

The value of this metric is the mean (signed) difference of the color components at the corresponding points of the two images. This metric is used for testing codecs and filters; a sketch of both MSAD and Delta follows the note below.

Note: red marks points where Xij > Yij, green marks points where Xij < Yij.
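
Both metrics reduce to a single NumPy expression; a minimal sketch (names are illustrative):

```python
import numpy as np

def msad(x: np.ndarray, y: np.ndarray) -> float:
    """Mean absolute difference of corresponding color components."""
    return float(np.mean(np.abs(x.astype(np.float64) - y.astype(np.float64))))

def delta(x: np.ndarray, y: np.ndarray) -> float:
    """Mean signed difference; positive where x is brighter on average."""
    return float(np.mean(x.astype(np.float64) - y.astype(np.float64)))
```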

MSU Blurring Metric

This metric allows you to compare the amount of blurring in two images. If the metric value for the first image is greater than for the second, the second image is more blurred than the first.

Note: red – the first image is sharper than the second; green – the second image is sharper than the first.

MSU Blocking Metric

This metric was created to measure the subjective blocking effect in a video sequence. In high-contrast areas of the frame blocking is not noticeable, but in smooth areas block edges are conspicuous. The metric also contains a heuristic method for detecting object edges that lie on block boundaries; in this case the metric value is pulled down, allowing blocking to be measured more precisely. Information from previous frames is used to achieve better accuracy.

SSIM Index

SSIM Index is based on measuring three components (luminance similarity, contrast similarity and structural similarity) and combining them into a result value.

Note: Brighter areas correspond to greater difference.

There are two implementations of SSIM in our program: fast and precise. The fast one is equal to our previous SSIM implementation. The difference is that the fast one uses a box filter, while the precise one uses Gaussian blur.

Notes:
  1. The visualization of the fast implementation appears shifted. This effect is caused by the sum calculation algorithm for the box filter: the sum is calculated over the block to the bottom-left or top-left of the pixel (depending on whether the image is bottom-up or top-down).
  2. The SSIM metric has two coefficients. They depend on the maximum value of the image color component and are calculated using the following equations (a sketch follows these notes):
    • C1 = 0.01 * 0.01 * video1Max * video2Max
    • C2 = 0.03 * 0.03 * video1Max * video2Max
    where video1Max is the maximum value of a given color component for the first video, video2Max is the maximum value of the same color component for the second video. Maximum value of a color component is calculated in the same way as for PSNR:
    • videoMax = 255 for 8 bit color components
    • videoMax = 255 + 3/4 for 10 bit color components
    • videoMax = 255 + 63/64 for 14 bit color components
    • videoMax = 255 + 255/256 for 16 bit color components
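
A sketch of an SSIM map in the spirit described above, switching between a box filter ("fast") and Gaussian blur ("precise"); the window size and sigma here are assumptions, not VQMT's actual parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def ssim_map(x, y, video1_max=255.0, video2_max=255.0, precise=False):
    """Per-pixel SSIM map (illustrative sketch of the scheme described above)."""
    c1 = 0.01 * 0.01 * video1_max * video2_max
    c2 = 0.03 * 0.03 * video1_max * video2_max
    # "Fast" uses a box filter, "precise" uses Gaussian blur; size/sigma assumed.
    if precise:
        blur = lambda a: gaussian_filter(a, sigma=1.5)
    else:
        blur = lambda a: uniform_filter(a, size=8)
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = blur(x), blur(y)
    var_x = blur(x * x) - mu_x ** 2
    var_y = blur(y * y) - mu_y ** 2
    cov = blur(x * y) - mu_x * mu_y
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# Frame value: the mean of the map, e.g. float(np.mean(ssim_map(f1, f2))).
```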

MultiScale SSIM INDEX

MultiScale SSIM INDEX is based on the SSIM metric computed on several downscaled levels of the original images. The result is a weighted combination of those values.

Note: Brighter areas correspond to greater difference.

Two algorithms are implemented for MultiScale SSIM, fast and precise, as for the SSIM metric. The difference is that the fast one uses a box filter, while the precise one uses Gaussian blur.
Notes:

  1. Because the resulting metric is calculated as a product of several metric values below 1.0, the visualization appears dark. The visualization of the fast implementation appears shifted; this effect is caused by the sum calculation algorithm for the box filter (see the SSIM notes above).
  2. Level weights (0 corresponds to the original frame, 4 corresponds to the highest level; a sketch of the combination follows this list):
    • WEIGHTS[0] = 0.0448;
    • WEIGHTS[1] = 0.2856;
    • WEIGHTS[2] = 0.3001;
    • WEIGHTS[3] = 0.2363;
    • WEIGHTS[4] = 0.1333;
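
Note 1 above says the levels are combined multiplicatively; the standard MS-SSIM rule raises each level's value to its weight and multiplies (the weights above sum to approximately 1). A sketch under that assumption, with simple halving as the downscale step (VQMT's exact downscaling filter is not specified here):

```python
from scipy.ndimage import zoom

WEIGHTS = [0.0448, 0.2856, 0.3001, 0.2363, 0.1333]

def ms_ssim(x, y, ssim_fn):
    """Multi-scale SSIM sketch: ssim_fn(x, y) returns the mean SSIM of two
    frames; each level halves the resolution (downscale filter is assumed)."""
    result = 1.0
    for level, w in enumerate(WEIGHTS):
        result *= ssim_fn(x, y) ** w  # assumes positive per-level SSIM values
        if level < len(WEIGHTS) - 1:
            x, y = zoom(x, 0.5), zoom(y, 0.5)
    return result
```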

3-Component SSIM INDEX

3-Component SSIM Index is based on a region division of the source frames. There are three types of regions: edges, textures and smooth regions. The resulting metric is calculated as a weighted average of the SSIM metric over those regions. The human eye perceives differences more precisely in textured and edge regions than in smooth regions. The division is based on the gradient magnitude computed at every pixel of the images.

Note: Brighter areas correspond to greater difference.

Spatio-Temporal SSIM

The idea of this algorithm is to use motion-oriented weighted windows for the SSIM Index. The MSU Motion Estimation algorithm is used to retrieve the motion information. Based on the ME results, a weighting window is constructed for every pixel. This window can use up to 33 consecutive frames (16 + the current frame + 16). SSIM Index is then calculated for every window to take temporal distortions into account as well. In addition, another spooling technique is used in this implementation: only the lowest 6% of metric values in a frame are used to calculate the frame metric value. This makes metric values differ more between different files; the spooling step is sketched after the note below.

Note: Brighter blocks correspond to greater difference.
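
The 6% spooling step in isolation looks like this (an illustrative sketch; the exact rounding of the percentile boundary is an assumption):

```python
import numpy as np

def pool_lowest_tail(values: np.ndarray, tail: float = 0.06) -> float:
    """Frame value from per-pixel metric values using only the lowest 6%,
    as described above (illustrative sketch)."""
    flat = np.sort(values.ravel())
    k = max(1, int(round(tail * flat.size)))
    return float(flat[:k].mean())
```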

VQM

VQM uses the DCT to model human perception.

Note: Brighter blocks correspond to greater difference.

MSE

The value of this metric is the mean squared error: the mean of the squared differences of the color components at the corresponding points of the two images. As noted above, PSNR is equivalent to MSE up to a logarithmic transform.

VMAF

VMAF (Video Multimethod Assessment Fusion) is a modern reference metric developed by Netflix in cooperation with the University of Southern California. VQMT has full support for VMAF, with multiple configuration switches. You can use the default model (vmaf_v061), view elementary features, or load a custom model from a .pkl file. Note: the .model file should be located next to the .pkl file when using a custom model.

The 'Model preset' setting (in the command line, use '-set model_preset=..' after the metric) describes which features or model you want to use. To use a model, select 'vmaf_v061' or 'custom'. To view elementary VMAF features, select 'basic_features' or 'all_features'. To see both the vmaf_v061 output and all elementary features, select 'all'.

Specify the path to the custom model file in the 'Custom model (*.pkl)' field (in the command line, use '-set custom_model_file=..'). To activate this setting, select 'custom' in the 'Model preset' setting.

VMAF itself cannot be visualized because of the sophistication of the metric algorithm, but you can visualize some of the basic VMAF components. If visualization is turned on, specify the desired algorithm in the 'Visualize algorithm' field.

The 'Use phone model' switch ('-set phone_model=on/off/both' in the command line) allows you to apply the postprocessing developed by Netflix for smartphones. If you select 'both', you will see the results with and without this postprocessing in one output. This setting affects only the model output (not the basic features).

'Disable clipping values' ('-set disable_clip=false/true' in the command line) turns off clipping of the result value to 0..100 (or another range specified in the model settings). This setting affects only the model output.

'Use multithreading' ('-set multithreaded=true/false' in the command line) controls multithreading. If 'true', basic features are calculated asynchronously. If you already have a parallel workflow, disable multithreading to achieve better performance.

See also: https://medium.com/netflix-techblog/toward-a-practical-perceptual-video-quality-metric-653f208b9652

NIQE

NIQE (Naturalness Image Quality Evaluator) is the first no-reference image quality metric in VQMT, developed at The University of Texas at Austin by A. Mittal, R. Soundararajan and A. C. Bovik. This version uses the reference model from the authors. Visualization is currently not supported for this metric.

To learn more about this metric, refer to: [1] Mittal, A., R. Soundararajan, and A. C. Bovik. 'Making a Completely Blind Image Quality Analyzer.' IEEE Signal Processing Letters. Vol. 22, Number 3, March 2013, pp. 209-212.

See also: http://gfx.cs.princeton.edu/pubs/Liu_2013_ANM/sa13.pdf

SI

This is a no-reference metric that calculates Spatial perceptual Information (SI). The metric measures the complexity (entropy) of an input image. It represents the simplest SI realization: the standard deviation of the pixel values (Y component) of the Sobel transform of the input image:

SI(X) = std[Sobel(X(x,y))]

The values are in 0..1: 0 for a simple (monotone) frame, 1 for a very complex frame.

Sobel is the length of the vector (Sobelh, Sobelv), where Sobelh and Sobelv are the horizontal and vertical Sobel transforms. Edge pixels are excluded from the calculation.
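
A sketch with SciPy's Sobel operator (the final normalization into 0..1 is not specified above and is omitted here):

```python
import numpy as np
from scipy.ndimage import sobel

def spatial_information(y_plane: np.ndarray) -> float:
    """SI of a frame's Y plane: std of the Sobel gradient magnitude,
    excluding the one-pixel border (illustrative sketch)."""
    y = y_plane.astype(np.float64)
    grad_h = sobel(y, axis=1)  # horizontal Sobel transform
    grad_v = sobel(y, axis=0)  # vertical Sobel transform
    magnitude = np.hypot(grad_h, grad_v)
    return float(np.std(magnitude[1:-1, 1:-1]))  # edge pixels excluded
```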

The visualization of this metric shows the Sobel transform of the input image.

See also: ITU-T Recommendation P.910: Subjective video quality assessment methods for multimedia applications, 1999. – 37 p.

TI

This is a no-reference metric that calculates Temporal perceptual Information (TI). The metric measures the complexity (entropy) of the difference between consecutive frames of the input video. It represents the simplest TI realization: the standard deviation of the differences between corresponding pixel values of a frame and the previous frame:

TI(V) = std[Vn(x, y) − Vn−1(x, y)] / max,

where Vn(x, y) is the pixel value at position (x, y) of frame n, and max is the maximum possible sample value.

The values are in 0..1: 0 for a simple (static) video, 1 for a video with very diverse frames. The value for the first frame is 0; a sketch follows.
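
A minimal sketch of the per-frame computation (names are illustrative):

```python
import numpy as np

def temporal_information(frame: np.ndarray, prev_frame) -> float:
    """TI between consecutive Y planes (illustrative sketch).

    For integer samples, divide by the maximum possible sample value first,
    matching the '/max' normalization in the formula above; for samples
    already in 0..1 that division is a no-op.
    """
    if prev_frame is None:
        return 0.0  # the value for the first frame is 0
    diff = frame.astype(np.float64) - prev_frame.astype(np.float64)
    return float(np.std(diff))
```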

The visualization of this metric shows the difference between the current frame and the previous frame.

See also: ITU-T Recommendation P.910: Subjective video quality assessment methods for multimedia applications, 1999. – 37 p.

