Otsu Thresholding (OTSU)
This article was first published on my personal blog.
Otsu's method is an adaptive threshold-selection algorithm for grayscale images, proposed by the Japanese scholar Otsu in 1979; it is also known as the between-class variance method. Based on the image's gray-level distribution, it divides the image into foreground and background, where the foreground is the region we want to segment. The gray value separating foreground from background is the threshold we solve for by maximizing the between-class variance.
1. Algorithm Principle
Let the image have L gray levels, let n_i be the number of pixels with gray level i, and let N be the total number of pixels. The histogram distribution is then

$$p_i = \frac{n_i}{N}, \qquad \sum_{i=0}^{L-1} p_i = 1$$
A threshold t partitions the gray levels into two classes: C_0 = {0, 1, …, t} and C_1 = {t+1, t+2, …, L−1}. The probabilities and mean gray levels of classes C_0 and C_1 are then given by:
$$w_0 = \Pr(C_0) = \sum_{i=0}^{t} p_i$$

$$w_1 = \Pr(C_1) = \sum_{i=t+1}^{L-1} p_i = 1 - w_0$$

$$\mu_0 = \sum_{i=0}^{t} i \Pr(i \mid C_0) = \sum_{i=0}^{t} i p_i / w_0$$

$$\mu_1 = \sum_{i=t+1}^{L-1} i \Pr(i \mid C_1) = \sum_{i=t+1}^{L-1} i p_i / w_1$$

$$\mu_T = \sum_{i=0}^{L-1} i p_i$$
It is easy to verify that for any value of t:

$$w_0 \mu_0 + w_1 \mu_1 = \mu_T, \qquad w_0 + w_1 = 1$$
The variances of classes C_0 and C_1 are:

$$\sigma_0^2 = \sum_{i=0}^{t} (i - \mu_0)^2 p_i / w_0$$

$$\sigma_1^2 = \sum_{i=t+1}^{L-1} (i - \mu_1)^2 p_i / w_1$$
From these we define the within-class variance:

$$\sigma_w^2 = w_0 \sigma_0^2 + w_1 \sigma_1^2$$
the between-class variance:

$$\sigma_B^2 = w_0 (\mu_0 - \mu_T)^2 + w_1 (\mu_1 - \mu_T)^2 = w_0 w_1 (\mu_1 - \mu_0)^2$$
and the total variance:

$$\sigma_T^2 = \sigma_B^2 + \sigma_w^2$$
What is the between-class variance? Splitting the L gray levels at t produces two classes, which appear in the histogram as two clusters of gray levels. Our goal is to find the threshold that maximizes the separation in gray level between the two clusters, and then segment the grayscale image with it. In other words, we look for the value of t that maximizes the between-class variance.
2. Derivation of the Between-Class Variance
Substituting $\mu_T = w_0\mu_0 + w_1\mu_1$ and $w_0 + w_1 = 1$:

$$\begin{aligned}
\sigma_B^2 &= w_0(\mu_0-\mu_T)^2 + w_1(\mu_1-\mu_T)^2 \\
&= w_0[\mu_0 - w_0\mu_0 - w_1\mu_1]^2 + w_1[\mu_1 - w_0\mu_0 - w_1\mu_1]^2 \\
&= w_0[(1-w_0)\mu_0 - w_1\mu_1]^2 + w_1[(1-w_1)\mu_1 - w_0\mu_0]^2 \\
&= w_0[w_1\mu_0 - w_1\mu_1]^2 + w_1[w_0\mu_1 - w_0\mu_0]^2 \\
&= w_0 w_1^2(\mu_0-\mu_1)^2 + w_1 w_0^2(\mu_1-\mu_0)^2 \\
&= (\mu_1-\mu_0)^2\, w_0 w_1 (w_1 + w_0) \\
&= w_0 w_1 (\mu_1-\mu_0)^2
\end{aligned}$$
3. C++ Implementation (Core Code)
```cpp
// Assumes OpenCV; originImage is a single-channel 8-bit grayscale cv::Mat.
auto tempImage = originImage.clone();

// Gray-level histogram
std::vector<int> countPixel(256, 0);
for (int i = 0; i < tempImage.rows; i++)
    for (int j = 0; j < tempImage.cols; j++)
        countPixel[tempImage.at<uchar>(i, j)]++;

// Normalized histogram: p_i = n_i / N
std::vector<float> pixelProb;
float sum_pixel = static_cast<float>(tempImage.rows * tempImage.cols);
for (int count : countPixel)
    pixelProb.push_back(count / sum_pixel);

std::vector<float> class_within_variance_vec;   // within-class variance per threshold
std::vector<float> class_between_variance_vec;  // between-class variance per threshold
for (int k = 0; k < 256; k++) {
    // Class probabilities w0 and w1
    float class_0_prob = 0;
    for (int i = 0; i <= k; i++)
        class_0_prob += pixelProb[i];
    float class_1_prob = 1 - class_0_prob;

    // Class means mu0 (foreground) and mu1 (background);
    // the small epsilon guards against division by zero for empty classes.
    float class_0_average = 0;
    float class_1_average = 0;
    for (int i = 0; i <= k; i++)
        class_0_average += i * (pixelProb[i] / (class_0_prob + 1e-6f));
    for (int i = k + 1; i <= 255; i++)
        class_1_average += i * (pixelProb[i] / (class_1_prob + 1e-6f));

    // Class variances sigma0^2 and sigma1^2
    float class_0_variance = 0;
    float class_1_variance = 0;
    for (int i = 0; i <= k; i++)
        class_0_variance += std::pow(i - class_0_average, 2) * (pixelProb[i] / (class_0_prob + 1e-6f));
    for (int i = k + 1; i <= 255; i++)
        class_1_variance += std::pow(i - class_1_average, 2) * (pixelProb[i] / (class_1_prob + 1e-6f));

    // Within-class variance: w0*sigma0^2 + w1*sigma1^2
    // (class_0_variance is already a variance, so it must not be squared again)
    float class_within_variance = class_0_prob * class_0_variance + class_1_prob * class_1_variance;
    class_within_variance_vec.push_back(class_within_variance);

    // Between-class variance: w0*w1*(mu1 - mu0)^2
    float class_between_variance = class_0_prob * class_1_prob * std::pow(class_1_average - class_0_average, 2);
    class_between_variance_vec.push_back(class_between_variance);
}

// The optimal threshold maximizes the between-class variance
auto max_value_iter = std::max_element(class_between_variance_vec.begin(), class_between_variance_vec.end());
int k_value = std::distance(class_between_variance_vec.begin(), max_value_iter);
std::cout << "k_value:" << k_value << " max_var:" << *max_value_iter << std::endl;

// Equivalently, the same threshold minimizes the within-class variance:
// auto min_value_iter = std::min_element(class_within_variance_vec.begin(), class_within_variance_vec.end());
// int kk_value = std::distance(class_within_variance_vec.begin(), min_value_iter);

// Binarize with the found threshold
for (int i = 0; i < tempImage.rows; i++)
    for (int j = 0; j < tempImage.cols; j++)
        tempImage.at<uchar>(i, j) = (tempImage.at<uchar>(i, j) <= k_value) ? 0 : 255;
```

4. Results Comparison
The images below show, from left to right: the original grayscale image, the segmentation result from OpenCV's built-in OTSU algorithm, and the result of my own OTSU implementation.
The image below shows the Otsu thresholding result obtained with MATLAB's built-in function.
The indispensable Lenna.