OpenCV Homomorphic Filtering
Filtering Principle
In brief, homomorphic filtering is based on the illumination-reflectance model of an image. If the image function F(x,y) is written as the product of an illumination component i(x,y) and a reflectance component r(x,y), the model is F(x,y) = i(x,y) * r(x,y). Studying these two components shows that the illumination component generally accounts for the slowly varying, near-constant part of the grey levels and corresponds to the low-frequency information in the frequency domain; attenuating the incident light therefore narrows the image's grey-level range. The reflected light, on the other hand, is closely tied to object boundaries and corresponds to the high-frequency information; amplifying it raises the image contrast. A homomorphic filter is thus a frequency-domain filter that suppresses the low frequencies and boosts the high frequencies, compressing the grey-level range while stretching the contrast; its transfer function is less than 1 in the low-frequency region and greater than 1 in the high-frequency region. For details, see Homomorphic filtering.
Transfer function: H(u,v)
Principle flowchart:
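The flowchart corresponds to the standard homomorphic filtering pipeline, which, consistent with the steps implemented later in this post, can be written as:

    z(x,y) = ln f(x,y) = ln i(x,y) + ln r(x,y)
    Z(u,v) = DFT{ z(x,y) }
    S(u,v) = H(u,v) * Z(u,v)
    s(x,y) = IDFT{ S(u,v) }
    g(x,y) = exp( s(x,y) )

(The code below actually computes ln(1 + f/255) and subtracts 1 after the exponential, which keeps the values positive but follows the same structure.)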
Filter Design
Given the filter characteristics and transfer function described above, choosing a filter is straightforward; its expression is given below.
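Written out, this is the Gaussian-shaped high-frequency-emphasis transfer function that the code in this post implements (γH and γL are the high- and low-frequency gains, D(u,v) is the distance from (u,v) to the centre of the shifted spectrum):

    H(u,v) = (γH - γL) * (1 - exp(-c * D^2(u,v) / D0^2)) + γL

In the code, c corresponds to the parameter n, γH to upper and γL to lower.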
Here the parameter c controls how quickly the curve rises (a reference value is 2). The parameter D0 is explained by analogy with the Gaussian low-pass filter: it is the cutoff value on the frequency axis, as shown in the figure.
The choice of D0 deserves some care: for a high-pass filter it determines how much of the image's energy is filtered out. How to choose it adaptively is discussed later.
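The adaptive choice is only hinted at in the rest of the post: the main function below references a getRadius(fpower, 0.15) helper, but the call is commented out and its implementation is not shown. Purely as an illustration (this is an assumption, not the original helper), one way to derive D0 from the power spectrum returned by Fourier_Transform is to take the smallest radius around the shifted spectrum centre that encloses a given fraction of the non-DC spectral power:

    // Illustrative sketch only: the getRadius helper referenced (and commented out) in
    // homomorphicFiltering below is not shown in the original post. This version picks
    // the smallest radius around the shifted spectrum centre that encloses 'ratio' of
    // the total spectral power, ignoring the DC term so that it does not dominate.
    // 'power' is assumed to be the shifted, single-channel CV_32F power spectrum.
    int getRadius(const Mat& power, float ratio)
    {
        const int cx = power.cols / 2;
        const int cy = power.rows / 2;
        const int maxR = std::min(cx, cy);

        // Bin the power by integer distance from the centre.
        std::vector<double> ring(maxR + 1, 0.0);
        double total = 0.0;
        for (int i = 0; i < power.rows; i++)
        {
            for (int j = 0; j < power.cols; j++)
            {
                if (i == cy && j == cx)
                    continue;                       // skip the DC coefficient
                double p = power.at<float>(i, j);
                total += p;
                int R = cvRound(std::sqrt(double((i - cy) * (i - cy) + (j - cx) * (j - cx))));
                if (R <= maxR)
                    ring[R] += p;
            }
        }

        // Smallest radius whose enclosed (non-DC) power reaches the requested fraction.
        double enclosed = 0.0;
        for (int R = 1; R <= maxR; R++)
        {
            enclosed += ring[R];
            if (enclosed >= ratio * total)
                return R;
        }
        return maxR;
    }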
OpenCV Implementation
Data Preprocessing
Following the flowchart above, the first step is to prepare the data and then apply the natural logarithm (ln); this step is where problems most often arise. The image data f(x,y) read by OpenCV is normally in the 0-255 range; simply compute f(x,y)/255.0 + 1.0 and then take the logarithm. If instead you call the normalize function and then add 1, the final result g(x,y) turns out completely black: normalize applies a*f(x,y) + b to the original image, and the properties of the Fourier transform reveal where the problem comes from.
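A minimal sketch of the recommended preprocessing (the same steps appear at the start of Fourier_Transform below; img is assumed to be an 8-bit grayscale Mat):

    Mat img32f;
    img.convertTo(img32f, CV_32F, 1.0 / 255.0);  // f(x,y)/255.0, now in [0, 1]
    img32f += 1.0;                               // + 1.0 so the log stays non-negative
    log(img32f, img32f);                         // ln(1 + f/255)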
DFT
OpenCV implements dft, and an example is provided in the Sample/c++ directory; pay attention to the step that shifts the centre of the spectrum to the centre of the image. One extra point is computing the image's power spectrum (recall the D0 computation mentioned above). The power spectrum is computed as follows:
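Written out (this matches what Fourier_Transform returns below):

    P(u,v) = |F(u,v)|^2 = Re(F(u,v))^2 + Im(F(u,v))^2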
Butterworth Filter
From the parameters obtained above and the filter's transfer function, the filter itself can be computed. To match the layout of the DFT output, the filter also has two parts, real and imaginary (the imaginary channel is set to zero). Note that despite the "Butterworth" name used in the code, the curve actually implemented is the Gaussian-shaped emphasis function given earlier.
Frequency-Domain Filtering
Call mulSpectrums to carry out the multiplication and obtain the filtered spectrum; at this point remember to shift the frequency-domain centre back to the top-left corner of the image.
IDFT
OpenCV implements idft; note the DFT_SCALE flag (see the official OpenCV documentation for details). Then take the exponential (exp) of the result, split it into real and imaginary parts, compute the magnitude, and finally subtract 1 from the magnitude.
Data Recovery
Use normalize to stretch the result to [0, 255] (so the maximum becomes 255) and convert the data to CV_8U to obtain the final image.
Code
The snippets below assume the following common headers (OpenCV 2.x style API, e.g. CV_BGR2HSV and CV_MINMAX):

    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>

    using namespace cv;
    using namespace std;

Filter:
    // High-frequency-emphasis filter (Gaussian-shaped curve):
    // H = (gammaH - gammaL) * (1 - exp(-n * D^2 / D0^2)) + gammaL
    Mat Butterworth_Homomorphic_Filter(Size sz, int D, int n, float high_h_v_TB, float low_h_v_TB, Mat& realIm)
    {
        Mat single(sz.height, sz.width, CV_32F);
        cout << sz.width << " " << sz.height << endl;
        // centre.x indexes rows, centre.y indexes columns (matches the loops below)
        Point centre = Point(sz.height/2, sz.width/2);
        double radius;
        float upper = (high_h_v_TB * 0.1);   // gammaH
        float lower = (low_h_v_TB * 0.1);    // gammaL
        long dpow = D*D;                     // D0^2
        float W = (upper - lower);

        for(int i = 0; i < sz.height; i++)
        {
            for(int j = 0; j < sz.width; j++)
            {
                // squared distance from the spectrum centre, D^2(u,v)
                radius = pow((float)(i - centre.x), 2) + pow((float)(j - centre.y), 2);
                float r = exp(-n*radius/dpow);
                single.at<float>(i,j) = W*(1 - r) + lower;
            }
        }
        single.copyTo(realIm);

        Mat butterworth_complex;
        // make two channels (real, imaginary) to match the complex DFT output
        Mat butterworth_channels[] = {Mat_<float>(single), Mat::zeros(sz, CV_32F)};
        merge(butterworth_channels, 2, butterworth_complex);
        return butterworth_complex;
    }
DFT transform:
    // DFT; also returns the power spectrum
    Mat Fourier_Transform(Mat frame_bw, Mat& image_complex, Mat &image_phase, Mat &image_mag)
    {
        Mat frame_log;
        frame_bw.convertTo(frame_log, CV_32F);
        frame_log = frame_log/255;

        /*1. Take the natural log of the input (compute log(1 + Mag)).*/
        frame_log += 1;
        log(frame_log, frame_log); // log(1 + Mag)

        /*2. Expand the image to an optimal size.
        The performance of the DFT depends on the image size. It tends to be fastest
        for image sizes that are multiples of 2, 3 or 5.
        We can use the copyMakeBorder() function to expand the borders of an image.*/
        Mat padded;
        int M = getOptimalDFTSize(frame_log.rows);
        int N = getOptimalDFTSize(frame_log.cols);
        copyMakeBorder(frame_log, padded, 0, M - frame_log.rows, 0, N - frame_log.cols, BORDER_CONSTANT, Scalar::all(0));

        /*3. Make room for both the real and complex values.
        The result of the DFT is complex, i.e. two images (real + imaginary), and the
        frequency-domain range is much larger than the spatial one, so we store floats.
        That's why we convert the padded input to float and add a second channel to hold
        the complex values. planes is an array of 2 matrices
        (planes[0] = real part, planes[1] = imaginary part).*/
        Mat image_planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)};

        /*Create one multichannel array out of several single-channel ones.*/
        merge(image_planes, 2, image_complex);

        /*4. Compute the DFT. The result is a complex image: "image_complex".*/
        dft(image_complex, image_complex);

        /*5. Create the spectrum magnitude.
        Transform the real and imaginary values to magnitude.
        NB: we separate the real part from the imaginary part.*/
        split(image_complex, image_planes);
        // From here on, planes[0] holds the real part and planes[1] the imaginary part.
        phase(image_planes[0], image_planes[1], image_phase);
        magnitude(image_planes[0], image_planes[1], image_mag);

        // Power spectrum: Re^2 + Im^2
        pow(image_planes[0], 2, image_planes[0]);
        pow(image_planes[1], 2, image_planes[1]);
        Mat Power = image_planes[0] + image_planes[1];
        return Power;
    }
IDFT transform:
    void Inv_Fourier_Transform(Mat input, Mat& inverseTransform)
    {
        /*Make the IDFT*/
        Mat result;
        idft(input, result, DFT_SCALE);
        /*Take the exponential*/
        exp(result, result);
        vector<Mat> planes;
        split(result, planes);
        magnitude(planes[0], planes[1], planes[0]);
        planes[0] = planes[0] - 1.0;
        normalize(planes[0], planes[0], 0, 255, CV_MINMAX);
        planes[0].convertTo(inverseTransform, CV_8U);
    }
Spectrum shift:
    // SHIFT
    void Shifting_DFT(Mat &fImage)
    {
        // For visualization purposes we may also rearrange the quadrants of the result,
        // so that the origin (0,0) corresponds to the image center.
        Mat tmp, q0, q1, q2, q3;

        /*First crop the image if it has an odd number of rows or columns.
        AND-ing with -2 (two's complement: -2 = 111...10) clears bit 0, so an odd
        number of rows or columns is rounded down to the even number just below.*/
        fImage = fImage(Rect(0, 0, fImage.cols & -2, fImage.rows & -2));

        int cx = fImage.cols/2;
        int cy = fImage.rows/2;

        /*Rearrange the quadrants of the Fourier image so that the origin is at the image center*/
        q0 = fImage(Rect(0, 0, cx, cy));
        q1 = fImage(Rect(cx, 0, cx, cy));
        q2 = fImage(Rect(0, cy, cx, cy));
        q3 = fImage(Rect(cx, cy, cx, cy));

        /*Swap each quadrant with the quadrant diagonally opposite*/
        /*Swap q0 and q3*/
        q0.copyTo(tmp);
        q3.copyTo(q0);
        tmp.copyTo(q3);
        /*Swap q1 and q2*/
        q1.copyTo(tmp);
        q2.copyTo(q1);
        tmp.copyTo(q2);
    }
Main function:
    void homomorphicFiltering(Mat src, Mat& dst)
    {
        Mat img;
        Mat imgHls;
        vector<Mat> vHls;

        // For colour images, filter only the V channel so that hue and saturation are preserved.
        if(src.channels() == 3)
        {
            cvtColor(src, imgHls, CV_BGR2HSV);
            split(imgHls, vHls);
            vHls[2].copyTo(img);
        }
        else
            src.copyTo(img);

        //DFT
        //cout<<"DFT "<<endl;
        Mat img_complex, img_mag, img_phase;
        Mat fpower = Fourier_Transform(img, img_complex, img_phase, img_mag);
        Shifting_DFT(img_complex);
        Shifting_DFT(fpower);

        //int D0 = getRadius(fpower,0.15);
        int D0 = 10;
        int n = 2;
        int w = img_complex.cols;
        int h = img_complex.rows;

        //BHPF
        // Mat filter,filter_complex;
        // filter = BHPF(h,w,D0,n);
        // Mat planes[] = {Mat_<float>(filter), Mat::zeros(filter.size(), CV_32F)};
        // merge(planes,2,filter_complex);

        // With the 0.1 scaling inside the filter, rH = 150 and rL = 50 give gains of 15 and 5.
        int rH = 150;
        int rL = 50;
        Mat filter, filter_complex;
        filter_complex = Butterworth_Homomorphic_Filter(Size(w,h), D0, n, rH, rL, filter);

        // do mulSpectrums on the full dft
        mulSpectrums(img_complex, filter_complex, filter_complex, 0);
        // move the frequency-domain centre back to the top-left corner before the IDFT
        Shifting_DFT(filter_complex);

        //IDFT
        Mat result;
        Inv_Fourier_Transform(filter_complex, result);

        if(src.channels() == 3)
        {
            // crop away the DFT padding so the channel matches the original size
            vHls.at(2) = result(Rect(0, 0, src.cols, src.rows));
            merge(vHls, imgHls);
            cvtColor(imgHls, dst, CV_HSV2BGR);
        }
        else
            result(Rect(0, 0, src.cols, src.rows)).copyTo(dst);  // crop the padding for the grayscale path too
    }
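For completeness, a minimal usage sketch (the file name and window titles are placeholders, not from the original post):

    int main()
    {
        Mat src = imread("input.jpg");   // placeholder path
        if (src.empty())
            return -1;

        Mat dst;
        homomorphicFiltering(src, dst);

        imshow("source", src);
        imshow("homomorphic filtered", dst);
        waitKey(0);
        return 0;
    }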