【IP101】Image Quality Assessment: A Complete Walkthrough from Subjective Evaluation to Objective Metrics
Image Quality Assessment Explained 🔍
Image quality assessment is like the "quality appraiser" of the digital world! With a range of "appraisal tools" and evaluation criteria, we can assess image quality professionally, much as an appraiser examines a work of art. Let's step into this fascinating image "appraisal studio" together!
Table of Contents
- Image Quality Assessment Explained 🔍
- Table of Contents
- 1. Peak Signal-to-Noise Ratio (PSNR)
- 2. Structural Similarity (SSIM)
- 3. Mean Squared Error (MSE)
- 4. Visual Information Fidelity (VIF)
- 5. No-Reference Quality Assessment
- Notes and Caveats
- Summary
- References
1. Peak Signal-to-Noise Ratio (PSNR)
PSNR works like a "signal-to-noise meter" for measuring how much an image has been degraded. It computes the mean squared error (MSE) between the original and the distorted image and expresses the result in decibels (dB).
Mathematical expression:
$$PSNR = 10 \cdot \log_{10}\left(\frac{MAX_I^2}{MSE}\right)$$
where:
- $MAX_I$ is the maximum possible pixel value (typically 255 for 8-bit images)
- $MSE$ is the mean squared error
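For intuition: if an 8-bit image pair has $MSE = 100$, then $PSNR = 10 \cdot \log_{10}(255^2 / 100) \approx 28.1$ dB. Identical images give an infinite PSNR, and values of roughly 30 to 50 dB are typical of well-compressed 8-bit images.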
Code implementation (C++):
```cpp
// Assumes OpenCV headers (and <cmath>) are included and `using namespace cv;` is in effect.
double compute_psnr(const Mat& src1, const Mat& src2) {
    CV_Assert(src1.size() == src2.size());
    CV_Assert(src1.type() == src2.type());

    // Squared per-pixel difference
    Mat diff;
    absdiff(src1, src2, diff);
    diff.convertTo(diff, CV_64F);
    multiply(diff, diff, diff);

    double mse = mean(diff)[0];
    const double EPSILON = 1e-10;   // guard against log(0) for identical images
    if (mse < EPSILON) return INFINITY;

    double max_val = 255.0;         // assume 8-bit images
    return 20 * log10(max_val) - 10 * log10(mse);
}
```
Code implementation (Python):
```python
import numpy as np

def compute_psnr(img1: np.ndarray, img2: np.ndarray) -> float:
    """Compute the Peak Signal-to-Noise Ratio (PSNR).

    Args:
        img1: first image
        img2: second image

    Returns:
        float: PSNR value in dB
    """
    # Images must have the same shape
    assert img1.shape == img2.shape

    # Mean squared error (MSE)
    mse = np.mean((img1.astype(np.float32) - img2.astype(np.float32)) ** 2)

    # Identical images: avoid division by zero
    if mse == 0:
        return float('inf')

    # PSNR for 8-bit images
    max_pixel = 255.0
    psnr = 20 * np.log10(max_pixel) - 10 * np.log10(mse)
    return psnr
```
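One way to sanity-check the implementation above is to compare it with OpenCV's built-in `cv2.PSNR` on a synthetic image pair. The snippet below is a minimal sketch; the image size, random seed, and noise level are arbitrary choices, not values from this article.
```python
import cv2
import numpy as np

# Synthetic 8-bit test pair: a reference image and a noisy copy
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
noisy = np.clip(ref.astype(np.float32) + rng.normal(0, 10, ref.shape), 0, 255).astype(np.uint8)

print("compute_psnr:", compute_psnr(ref, noisy))  # our implementation
print("cv2.PSNR:    ", cv2.PSNR(ref, noisy))      # OpenCV's built-in, should closely agree
```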
2. Structural Similarity (SSIM)
SSIM works like a "structural similarity analyzer" that evaluates how well the image structure is preserved. It considers not only pixel-value differences but also the structural information of the image (local luminance, contrast, and structure).
Mathematical expression:
$$SSIM(x,y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$$
where:
- $\mu_x, \mu_y$ are the local means
- $\sigma_x^2, \sigma_y^2$ are the local variances and $\sigma_{xy}$ is the local covariance
- $c_1 = (K_1 L)^2$ and $c_2 = (K_2 L)^2$ are small stabilizing constants, with $K_1 = 0.01$, $K_2 = 0.03$, and $L$ the dynamic range (255 for 8-bit images)
Code implementation (C++):
```cpp
// Assumes OpenCV headers and `using namespace cv;`.
// `compute_local_stats` and `create_gaussian_kernel` are helper routines assumed to be
// defined elsewhere in the project: they produce Gaussian-weighted local mean/variance
// maps and a normalized Gaussian kernel, respectively.
double compute_ssim(const Mat& src1, const Mat& src2, int window_size) {
    CV_Assert(src1.size() == src2.size());
    CV_Assert(src1.type() == src2.type());

    // Convert to floating point
    Mat img1, img2;
    src1.convertTo(img1, CV_64F);
    src2.convertTo(img2, CV_64F);

    // Local statistics (mean and variance)
    Mat mu1, mu2, sigma1, sigma2;
    compute_local_stats(img1, mu1, sigma1, window_size);
    compute_local_stats(img2, mu2, sigma2, window_size);

    // Local covariance
    Mat mu1_mu2, sigma12;
    multiply(img1, img2, sigma12);
    filter2D(sigma12, sigma12, CV_64F,
             create_gaussian_kernel(window_size, window_size / 6.0));
    multiply(mu1, mu2, mu1_mu2);
    sigma12 -= mu1_mu2;

    // SSIM constants (K1, K2 from Wang et al. 2004; 255 = dynamic range of 8-bit images)
    const double K1 = 0.01, K2 = 0.03;
    double C1 = (K1 * 255) * (K1 * 255);
    double C2 = (K2 * 255) * (K2 * 255);

    // Per-pixel SSIM map, then average
    Mat ssim_map;
    multiply(2 * mu1_mu2 + C1, 2 * sigma12 + C2, ssim_map);
    Mat denom;
    multiply(mu1.mul(mu1) + mu2.mul(mu2) + C1, sigma1 + sigma2 + C2, denom);
    divide(ssim_map, denom, ssim_map);

    return mean(ssim_map)[0];
}
```
Code implementation (Python):
```python
import numpy as np
from scipy import signal

def compute_ssim(img1: np.ndarray, img2: np.ndarray,
                 window_size: int = 11) -> float:
    """Compute the Structural Similarity (SSIM) index of two grayscale images.

    Args:
        img1: first image
        img2: second image
        window_size: size of the local window

    Returns:
        float: SSIM value (higher is better, 1.0 means identical)
    """
    # Images must have the same shape
    assert img1.shape == img2.shape

    # Convert to floating point
    img1 = img1.astype(np.float32)
    img2 = img2.astype(np.float32)

    # Build a normalized 2D Gaussian window (sigma = 1.5, as in Wang et al. 2004)
    ax = np.arange(window_size) - (window_size - 1) / 2.0
    gauss_1d = np.exp(-(ax ** 2) / (2 * 1.5 ** 2))
    window = np.outer(gauss_1d, gauss_1d)
    window = window / np.sum(window)

    # Local means
    mu1 = signal.convolve2d(img1, window, mode='valid')
    mu2 = signal.convolve2d(img2, window, mode='valid')

    # Local variances and covariance
    mu1_sq = mu1 * mu1
    mu2_sq = mu2 * mu2
    mu1_mu2 = mu1 * mu2
    sigma1_sq = signal.convolve2d(img1 * img1, window, mode='valid') - mu1_sq
    sigma2_sq = signal.convolve2d(img2 * img2, window, mode='valid') - mu2_sq
    sigma12 = signal.convolve2d(img1 * img2, window, mode='valid') - mu1_mu2

    # SSIM constants (K1 = 0.01, K2 = 0.03, dynamic range 255)
    C1 = (0.01 * 255) ** 2
    C2 = (0.03 * 255) ** 2

    # Per-pixel SSIM map, then average
    ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / \
               ((mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2))
    return float(np.mean(ssim_map))
```
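As a quick usage sketch, the example below scores a grayscale image against itself and against a blurred copy; identical inputs should give exactly 1.0 and the degraded copy should score lower. The synthetic test image and blur parameters are arbitrary choices for illustration.
```python
import cv2
import numpy as np

# Synthetic grayscale test image: a smooth gradient with a little texture
x = np.linspace(0, 255, 256, dtype=np.float32)
rng = np.random.default_rng(0)
img = np.clip(np.add.outer(x, x) / 2 + rng.normal(0, 5, (256, 256)), 0, 255).astype(np.uint8)
blurred = cv2.GaussianBlur(img, (7, 7), 2.0)

print("SSIM(img, img)     =", compute_ssim(img, img))      # identical images -> 1.0
print("SSIM(img, blurred) =", compute_ssim(img, blurred))  # degraded copy -> below 1.0
```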
3. Mean Squared Error (MSE)
MSE is like an "error ruler" that measures the pixel-wise difference between two images. It is the basis of the PSNR computation and the most basic image quality metric.
Mathematical expression:
$$MSE = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[I(i,j) - K(i,j)\right]^2$$
where:
- $M, N$ are the image dimensions
- $I, K$ are the original and distorted images
Code implementation (C++):
```cpp
// Assumes OpenCV headers and `using namespace cv;`.
double compute_mse(const Mat& src1, const Mat& src2) {
    CV_Assert(src1.size() == src2.size());
    CV_Assert(src1.type() == src2.type());

    // Squared per-pixel difference, averaged over the image
    Mat diff;
    absdiff(src1, src2, diff);
    diff.convertTo(diff, CV_64F);
    multiply(diff, diff, diff);
    return mean(diff)[0];
}
```
Code implementation (Python):
```python
import numpy as np

def compute_mse(img1: np.ndarray, img2: np.ndarray) -> float:
    """Compute the Mean Squared Error (MSE).

    Args:
        img1: first image
        img2: second image

    Returns:
        float: MSE value
    """
    # Images must have the same shape
    assert img1.shape == img2.shape

    # Mean of the squared pixel differences
    mse = np.mean((img1.astype(np.float32) - img2.astype(np.float32)) ** 2)
    return float(mse)
```
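Since PSNR is just a logarithmic re-expression of MSE, the two functions defined above should agree with each other through $PSNR = 10 \cdot \log_{10}(255^2 / MSE)$. A minimal consistency check, assuming `compute_mse` and `compute_psnr` from earlier in this article are in scope:
```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
b = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

mse = compute_mse(a, b)
psnr_from_mse = 10 * np.log10(255.0 ** 2 / mse)

# Both routes should agree to floating-point precision
assert np.isclose(psnr_from_mse, compute_psnr(a, b))
print(f"MSE = {mse:.2f}, PSNR = {psnr_from_mse:.2f} dB")
```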
4. Visual Information Fidelity (VIF)
VIF works like a "visual fidelity detector": it evaluates image quality based on natural scene statistics and properties of the human visual system (HVS). VIF measures how much of the information present in the reference image survives the distortion process.
Mathematical expression:
$$VIF = \frac{\sum_{j\in\text{subbands}} I\!\left(C_j^N; F_j^N \mid s_j^N\right)}{\sum_{j\in\text{subbands}} I\!\left(C_j^N; E_j^N \mid s_j^N\right)}$$
where:
- $I(\cdot\,;\cdot \mid \cdot)$ denotes conditional mutual information
- $C_j^N$ are the reference image coefficients in subband $j$
- $F_j^N$ are the distorted image coefficients as perceived through the HVS channel
- $E_j^N$ are the reference image coefficients as perceived through the HVS channel (i.e. with visual noise added)
Code implementation (C++):
```cpp
// Assumes OpenCV headers and `using namespace cv;`.
// Note: this is a simplified multi-scale, spatial-domain approximation of VIF,
// not the full wavelet-domain GSM formulation from the paper.
// `compute_local_stats` (local mean/variance) and EPSILON are assumed to be
// defined elsewhere in the project.
double compute_vif(const Mat& src1, const Mat& src2, int num_scales) {
    CV_Assert(src1.size() == src2.size());
    CV_Assert(src1.type() == src2.type());

    // Convert to floating point
    Mat ref, dist;
    src1.convertTo(ref, CV_64F);
    src2.convertTo(dist, CV_64F);

    double vif = 0.0;
    double total_bits = 0.0;

    // Multi-scale decomposition
    for (int scale = 0; scale < num_scales; scale++) {
        // Local statistics
        Mat mu1, mu2, sigma1, sigma2;
        compute_local_stats(ref, mu1, sigma1, 3);
        compute_local_stats(dist, mu2, sigma2, 3);

        // Information estimates (a fuller implementation would estimate the gain g
        // from the local covariance rather than the variance ratio)
        Mat g = sigma2 / (sigma1 + EPSILON);
        Mat sigma_n = 0.1 * sigma1;  // assumed visual noise variance
        Mat bits_ref, bits_dist;
        log(1 + sigma1 / (sigma_n + EPSILON), bits_ref);
        log(1 + g.mul(g).mul(sigma1) / (sigma_n + EPSILON), bits_dist);

        vif += sum(bits_dist)[0];
        total_bits += sum(bits_ref)[0];

        // Downsample for the next scale
        if (scale < num_scales - 1) {
            pyrDown(ref, ref);
            pyrDown(dist, dist);
        }
    }

    return vif / total_bits;
}
```
Code implementation (Python):
```python
import cv2
import numpy as np
from scipy.ndimage import gaussian_filter

def compute_vif(img1: np.ndarray, img2: np.ndarray, num_scales: int = 4) -> float:
    """Compute a simplified Visual Information Fidelity (VIF) score.

    Note: this is a multi-scale, spatial-domain approximation rather than the
    full wavelet-domain GSM formulation from the original paper.

    Args:
        img1: reference image
        img2: distorted image
        num_scales: number of scales

    Returns:
        float: VIF value
    """
    # Convert to floating point
    img1 = img1.astype(np.float32)
    img2 = img2.astype(np.float32)

    vif = 0.0
    total_bits = 0.0

    # Multi-scale decomposition
    for scale in range(num_scales):
        # Local statistics (variances clamped to avoid tiny negative values)
        mu1 = gaussian_filter(img1, sigma=1.5)
        mu2 = gaussian_filter(img2, sigma=1.5)
        sigma1 = np.maximum(gaussian_filter(img1**2, sigma=1.5) - mu1**2, 0)
        sigma2 = np.maximum(gaussian_filter(img2**2, sigma=1.5) - mu2**2, 0)
        sigma12 = gaussian_filter(img1*img2, sigma=1.5) - mu1*mu2

        # Information estimates: g is the local gain from reference to distorted image
        g = sigma12 / (sigma1 + 1e-10)
        sigma_n = 0.1 * sigma1  # assumed visual noise variance
        bits_ref = np.log2(1 + sigma1 / (sigma_n + 1e-10))
        bits_dist = np.log2(1 + g**2 * sigma1 / (sigma_n + 1e-10))

        vif += np.sum(bits_dist)
        total_bits += np.sum(bits_ref)

        # Downsample for the next scale
        if scale < num_scales - 1:
            img1 = cv2.pyrDown(img1)
            img2 = cv2.pyrDown(img2)

    return float(vif / total_bits)
```
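A minimal usage sketch for the simplified VIF above, using an arbitrary synthetic grayscale pair: the score should be close to 1 when reference and test image are identical, and drop below 1 as blur removes information.
```python
import cv2
import numpy as np

# Synthetic grayscale reference and a blurred (information-losing) copy
rng = np.random.default_rng(2)
ref = cv2.GaussianBlur(rng.integers(0, 256, (256, 256), dtype=np.uint8), (5, 5), 1.0)
dist = cv2.GaussianBlur(ref, (9, 9), 3.0)

print("VIF(ref, ref)  =", compute_vif(ref, ref))   # identical images -> ~1.0
print("VIF(ref, dist) =", compute_vif(ref, dist))  # blurred copy -> below 1.0
```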
5. No-Reference Quality Assessment
No-reference quality assessment is like an "independent appraiser": it estimates image quality without a reference image by analyzing the natural statistical properties of the image itself.
Main approaches:
- Based on natural scene statistics
- Based on image feature analysis
- Based on deep learning
Code implementation (C++):
```cpp
// Assumes OpenCV headers and `using namespace cv; using namespace std;`.
// Note: this is a simplified NIQE-style score. It measures how widely the local
// features are spread within the image itself, whereas the original NIQE compares
// the features against a multivariate Gaussian (MVG) model pre-trained on
// pristine natural images.
double compute_niqe(const Mat& src, int patch_size) {
    // Convert to grayscale
    Mat gray;
    cvtColor(src, gray, COLOR_BGR2GRAY);
    gray.convertTo(gray, CV_64F);

    // Extract local features from overlapping patches
    vector<double> features;
    int stride = patch_size / 2;
    for (int i = 0; i <= gray.rows - patch_size; i += stride) {
        for (int j = 0; j <= gray.cols - patch_size; j += stride) {
            Mat patch = gray(Rect(j, i, patch_size, patch_size));

            // Local mean and standard deviation
            Scalar mean, stddev;
            meanStdDev(patch, mean, stddev);

            // Skewness and (excess) kurtosis
            double m3 = 0, m4 = 0;
            Mat centered = patch - mean[0];
            Mat centered_pow3, centered_pow4;
            pow(centered, 3, centered_pow3);
            m3 = sum(centered_pow3)[0] / (patch_size * patch_size);
            pow(centered, 4, centered_pow4);
            m4 = sum(centered_pow4)[0] / (patch_size * patch_size);
            double skewness = m3 / pow(stddev[0], 3);
            double kurtosis = m4 / pow(stddev[0], 4) - 3;

            features.push_back(mean[0]);
            features.push_back(stddev[0]);
            features.push_back(skewness);
            features.push_back(kurtosis);
        }
    }

    // Feature mean and covariance
    int rows = static_cast<int>(features.size() / 4);
    Mat feat_mat(rows, 4, CV_64F);
    for (int i = 0; i < rows; i++) {
        for (int j = 0; j < 4; j++) {
            feat_mat.at<double>(i, j) = features[i * 4 + j];
        }
    }
    Mat mean, cov;
    calcCovarMatrix(feat_mat, cov, mean, COVAR_NORMAL | COVAR_ROWS);

    // Mahalanobis-style distance of each feature vector to the MVG model
    // (only the diagonal of `dist` is needed; computing the full matrix is
    // simple but wasteful for very large images)
    Mat diff = feat_mat - repeat(mean, feat_mat.rows, 1);
    Mat dist = diff * cov.inv() * diff.t();

    // Extract the diagonal elements
    Mat diagonal(1, dist.rows, CV_64F);
    for (int i = 0; i < dist.rows; i++) {
        diagonal.at<double>(0, i) = dist.at<double>(i, i);
    }

    // Mean of the diagonal elements
    double mean_val = cv::mean(diagonal)[0];
    return sqrt(mean_val);
}
```
Code implementation (Python):
```python
import cv2
import numpy as np

def no_reference_quality_assessment(img: np.ndarray) -> float:
    """Heuristic no-reference quality score for an 8-bit grayscale image.

    Args:
        img: input image (uint8, single channel)

    Returns:
        float: quality score (between 0 and 1, higher is better)
    """
    # Image gradients
    sobelx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
    sobely = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)

    # Gradient magnitude
    gradient_magnitude = np.sqrt(sobelx**2 + sobely**2)

    # Local contrast (spread of the gradient magnitude)
    local_contrast = np.std(gradient_magnitude)

    # Image entropy
    hist = cv2.calcHist([img], [0], None, [256], [0, 256])
    hist = hist / np.sum(hist)
    entropy = -np.sum(hist * np.log2(hist + 1e-10))

    # Noise level (high-frequency residual); compute in float to avoid uint8 wrap-around
    img_f = img.astype(np.float32)
    noise = np.std(img_f - cv2.GaussianBlur(img_f, (5, 5), 0))

    # Weighted combination; each term is normalized to roughly [0, 1]
    # (contrast by 255, entropy by its 8-bit maximum of 8 bits)
    score = (local_contrast / 255) * 0.4 + (entropy / 8) * 0.3 + (1 - noise / 255) * 0.3
    return float(np.clip(score, 0, 1))
```
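A short usage sketch for the heuristic score above, assuming an 8-bit grayscale input. The synthetic images and blur parameters are arbitrary; because the score is a hand-tuned heuristic, the absolute values have no calibrated meaning and are only useful for relative comparison.
```python
import cv2
import numpy as np

rng = np.random.default_rng(3)
sharp = cv2.GaussianBlur(rng.integers(0, 256, (256, 256), dtype=np.uint8), (3, 3), 0.8)
soft = cv2.GaussianBlur(sharp, (11, 11), 4.0)  # heavily blurred copy

print("score(sharp) =", no_reference_quality_assessment(sharp))
print("score(soft)  =", no_reference_quality_assessment(soft))
```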
Notes and Caveats
- Image preprocessing
  - Make sure the inputs have been properly preprocessed (denoising, alignment, etc.) before assessment
  - For color images, it is usually best to convert to grayscale first
  - Make sure the images have matching sizes, rescaling if necessary
- Choosing a metric (the comparison sketch after this list makes the trade-offs concrete)
  - PSNR is well suited to assessing compression quality but is insensitive to structural distortion
  - SSIM captures structural similarity better but costs more to compute
  - MSE is simple to compute but correlates poorly with human perception
  - VIF is computationally heavy but agrees better with human perception
  - No-reference methods suit real-time use but are generally less accurate
- Performance optimization
  - For large images, consider evaluating only a region of interest (ROI)
  - Multi-threading or GPU acceleration can speed up the computation
  - For real-time applications, downsample the input or use faster approximate algorithms
- Practical recommendations
  - Choose the metric that fits the specific application scenario
  - Combining several metrics can give a more robust overall assessment
  - Periodically verify that the assessment results remain accurate
  - Keep the computational cost of the metric in mind
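To make the metric trade-offs above concrete, the sketch below (assuming the `compute_mse`, `compute_psnr`, and `compute_ssim` functions defined earlier in this article are in scope) scores two different degradations of the same synthetic reference. The point is simply that the metrics weight pixel error and structure differently, so they do not have to agree on which degraded version is "better"; all parameters here are arbitrary illustration values.
```python
import cv2
import numpy as np

rng = np.random.default_rng(4)
ref = cv2.GaussianBlur(rng.integers(0, 256, (256, 256), dtype=np.uint8), (3, 3), 1.0)

# Two different degradations of the same reference
noisy = np.clip(ref + rng.normal(0, 15, ref.shape), 0, 255).astype(np.uint8)
blurred = cv2.GaussianBlur(ref, (9, 9), 3.0)

for name, img in [("noisy", noisy), ("blurred", blurred)]:
    print(f"{name:8s} MSE={compute_mse(ref, img):7.2f}  "
          f"PSNR={compute_psnr(ref, img):6.2f} dB  "
          f"SSIM={compute_ssim(ref, img):.4f}")
```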
Summary
Image quality assessment is the "quality appraiser" of the digital world! With appraisal tools such as objective metrics, subjective evaluation methods, and no-reference quality assessment, we can evaluate image quality professionally. In practice, the right "appraisal scheme" depends on the specific scenario, just as an appraiser chooses different methods for different works of art.
Remember: good image quality assessment is like an experienced appraiser: it must judge accurately while keeping the practical requirements of the application in mind! 🔍
References
- Wang Z, et al. Image quality assessment: from error visibility to structural similarity. IEEE TIP, 2004
- Sheikh H R, et al. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE TIP, 2006
- Mittal A, et al. No-reference image quality assessment in the spatial domain. IEEE TIP, 2012
- Zhang L, et al. FSIM: A feature similarity index for image quality assessment. IEEE TIP, 2011
- OpenCV documentation: https://docs.opencv.org/
- More resources: IP101 project homepage