In the previous chapters we used buttons to drive OpenCV, which is the quickest way to learn how to issue the commands we want from inside Qt. From here on we will build the GUI in a more conventional way. In most desktop applications you will see a menu bar whose drop-down items trigger the available actions, and in this chapter we will use menus to expose the functions we want.
Step 1: Create a Qt GUI Application
Project creation is the same as in the previous chapters, so it is not repeated here. Once the project is open, include the OpenCV libraries in the .pro file as before.
Step 2: Build the menu
Creating a menu in Qt is very simple. In the Design view you will see a placeholder labeled Type Here; double-click it to enter edit mode and type the name of the menu item, as shown in the figure below. Here we enter File at the first level and Open File at the second level. Next, in the Action Editor pane, right-click the new action and click Edit. In the Edit Action dialog you can set Text (the displayed label), Object name, and so on; the notable one is Shortcut, which lets you assign a hotkey to the action. Since this action opens a file, we set it to Ctrl+O. Click OK when you are done. To implement the action's behavior, right-click it, choose Go To Slot, and select the triggered() signal.
Qt then generates on_actionOpen_File_triggered(), where you can add the desired behavior. Note: the code below reuses the file-opening approach from Lesson 2, so remember to add the related includes, the UpdateQImage helper, and the display label, or it will not run. The code is as follows:
QString fileName = QFileDialog::getOpenFileName(this,
        tr("Open Image"), ".", tr("Image Files (*.png *.jpeg *.jpg *.bmp)"));
image = cv::imread(fileName.toAscii().data());
result = image;
if (image.data) {
    cv::cvtColor(result, result, CV_BGR2RGB);
    UpdateQImage(result);
}
Step 3: Quick test
Run the program; pressing Ctrl+O now opens the image successfully. The result is shown below. How do you change the window title "MainWindow"? Simply edit the windowTitle property, e.g. to OpenCV With QT.
Step 4: Load the previously written header and source files
In Lesson 3 we wrote a small library of our own; how do we load it directly into a new project? First copy the two files into the src folder of the new project. Then right-click on Headers and on Sources, choose "Add Existing Files...", and add cvcore.h and cvcore.cpp respectively. After adding them, both files appear in the project tree, as shown below.
If you now open the QT_GUI_Opencv.pro file, you will find that Qt has automatically added the two files to SOURCES and HEADERS. To use the library, of course, it must be included: go back to mainwindow.hpp and add #include "cvcore.h". The library is now successfully part of the project.
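For reference, after adding the files the relevant section of the .pro file typically looks like the fragment below (the exact file list depends on your project; the mainwindow and main entries here are assumed):

```
SOURCES += main.cpp \
    mainwindow.cpp \
    cvcore.cpp

HEADERS += mainwindow.h \
    cvcore.h
```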
Step 5: Add another image-processing menu
Using the menu-creation procedure described above, add a Process menu with a second-level Flip item, and likewise create a triggered() action for it.
The execution result is shown below.
Step 6: Build your own menu actions (Edit)
Step 5 only demonstrated how to hook a function up to a menu item. From here on we will build the features we want one by one. First edit the Edit menu and add four actions: Restore Origional Image, Flip Horizontally, Flip Upside Down, and Contrast Brightness, then attach the corresponding behavior to each of them.
The functions used are as follows:
Restore Origional Image
This action restores the original image. After adding the triggered() slot, fill it in as follows.
void MainWindow::on_actionRestore_Origional_Image_triggered()
{
result=image;
UpdateQImage(result);
}
Flip Horizontally
Flips the image horizontally.
void MainWindow::on_actionFlip_Horizontally_triggered()
{
result=IPtool.flip(result,1);
UpdateQImage(result);
}
Flip Upside Down
Flips the image vertically.
void MainWindow::on_actionFlip_Upside_Down_triggered()
{
result=IPtool.flip(result,-1);
UpdateQImage(result);
}
Contrast Brightness
Adjusts the image brightness and contrast. The horizontal-flip function above was already added to cvcore, so nothing new was needed there; Contrast Brightness, however, is a function not yet defined in this program, so we must add the handling code to cvcore.h and cvcore.cpp, as listed below.
cvcore.h
// Contrast & brightness adjustment
cv::Mat contast_brightness(const cv::Mat &image, const double &alpha, const int &beta);
cv::Mat contast_brightness(const cv::Mat &image);

cvcore.cpp
cv::Mat CVCore::contast_brightness(const cv::Mat &image, const double &alpha, const int &beta) {
image.convertTo(result, -1,alpha,beta);
return result;
}

mainwindow.cpp
void MainWindow::on_actionContrast_Brightness_triggered()
{
result=IPtool.contast_brightness(result,1.5,100);
UpdateQImage(result);
}按照上述方式,將可以成功完成四個動作。
Step 7: Build the Filtering menu
Next we add several more OpenCV image-processing operations, organized as follows:
Filtering -> Sharpen
Filtering -> Customeringed Filt
Filtering -> Smoothing -> Homogeneous Blur, Gaussian Blur, Median Blur, Bilateral Filter
Sharpen
Sharpens the image.
cvcore.h
// Sharpen image
cv::Mat sharpen(const cv::Mat &image);

cvcore.cpp
cv::Mat CVCore::sharpen(const cv::Mat &image) {
cv::Mat kern = (cv::Mat_<char>(3,3) <<
0,-1,0,
-1, 5,-1,
0,-1,0);
filter2D(image, result, image.depth(), kern);
return result;
}

mainwindow.cpp
void MainWindow::on_actionSharpen_triggered()
{
result=IPtool.sharpen(result);
UpdateQImage(result);
}
Customeringed Filt
Applies a filter kernel you define yourself.
cvcore.h
cv::Mat filtering(const cv::Mat &image, const cv::Mat &kern);

cvcore.cpp
cv::Mat CVCore::filtering(const cv::Mat &image, const cv::Mat &kern) {
filter2D(image, result, image.depth(), kern);
return result;
}

mainwindow.cpp
void MainWindow::on_actionCustomeringed_Filt_triggered()
{
cv::Mat kern =(cv::Mat_<char>(3,3)<< 0,-1,0,-1,5,-1,0,-1,0);
result=IPtool.filtering(result,kern);
UpdateQImage(result);
}
Homogeneous Blur
Homogeneous (box) blur.
cvcore.h
// Homogenous Blur Filter
cv::Mat HomogenousBlur(const cv::Mat &image, const int &MaxKern);

cvcore.cpp
cv::Mat CVCore::HomogenousBlur(const cv::Mat &image, const int &MaxKern) {
cv::blur(image,result,cv::Size(MaxKern,MaxKern),cv::Point(-1,-1));
return result;
}

mainwindow.cpp
void MainWindow::on_actionHomogeneous_Blur_triggered()
{
result=IPtool.HomogenousBlur(result,5);
UpdateQImage(result);
}
Gaussian Blur
Gaussian blur.
cvcore.h
// Gaussian Blur Filter
cv::Mat myGaussianBlur(const cv::Mat &image, const int &MaxKern);

cvcore.cpp
cv::Mat CVCore::myGaussianBlur(const cv::Mat &image, const int &MaxKern) {
cv::GaussianBlur(image,result,cv::Size(MaxKern,MaxKern),0,0);
return result;
}

mainwindow.cpp
void MainWindow::on_actionGaussian_Blur_triggered()
{
result=IPtool.myGaussianBlur(result,5);
UpdateQImage(result);
}
Median Blur
Median blur.
cvcore.h
// Median Blur Filter
cv::Mat MedianBlur(const cv::Mat &image, const int &MaxKern);

cvcore.cpp
cv::Mat CVCore::MedianBlur(const cv::Mat &image, const int &MaxKern) {
cv::medianBlur(image,result,MaxKern);
return result;
}

mainwindow.cpp
void MainWindow::on_actionMedian_Blur_triggered()
{
result=IPtool.MedianBlur(result,5);
UpdateQImage(result);
}
Bilateral Filter
Bilateral (edge-preserving) filtering.
cvcore.h
// Bilateral Filter
cv::Mat BilateralBlur(const cv::Mat &image, const int &MaxKern);

cvcore.cpp
cv::Mat CVCore::BilateralBlur(const cv::Mat &image, const int &MaxKern) {
cv::bilateralFilter(image,result,MaxKern,MaxKern*2,MaxKern/2);
return result;
}

mainwindow.cpp
void MainWindow::on_actionBilateral_Filter_triggered()
{
result=IPtool.BilateralBlur(result,5);
UpdateQImage(result);
}
That completes the image-processing steps above. Each of these functions takes different parameters, and how those parameters are set will change the processed result; to let the user choose them at run time, a horizontalSlider can be used.
filter2D — Convolves an image with the kernel.
C++: void filter2D(InputArray src, OutputArray dst, int ddepth, InputArray kernel, Point anchor=Point(-1,-1), double delta=0, int borderType=BORDER_DEFAULT)
Python: cv2.filter2D(src, ddepth, kernel[, dst[, anchor[, delta[, borderType]]]]) → dst
C: void cvFilter2D(const CvArr* src, CvArr* dst, const CvMat* kernel, CvPoint anchor=cvPoint(-1,-1))
Parameters:
- src – Source image.
- dst – Destination image of the same size and the same number of channels as src.
- ddepth – Desired depth of the destination image. If it is negative, it will be the same as src.depth().
- kernel – Convolution kernel (or rather a correlation kernel), a single-channel floating-point matrix. To apply different kernels to different channels, split the image into separate color planes using split() and process them individually.
- anchor – Anchor of the kernel, indicating the relative position of a filtered point within the kernel. The anchor should lie within the kernel. The special default value (-1,-1) means the anchor is at the kernel center.
- delta – Optional value added to the filtered pixels before storing them in dst.
- borderType – Pixel extrapolation method. See borderInterpolate() for details.
blur — Smoothes an image using the normalized box filter.
C++: void blur(InputArray src, OutputArray dst, Size ksize, Point anchor=Point(-1,-1), int borderType=BORDER_DEFAULT)
Python: cv2.blur(src, ksize[, dst[, anchor[, borderType]]]) → dst
Parameters:
- src – Source image.
- dst – Destination image of the same size and type as src.
- ksize – Smoothing kernel size.
- anchor – Anchor point. The default value Point(-1,-1) means the anchor is at the kernel center.
- borderType – Border mode used to extrapolate pixels outside of the image.
The call blur(src, dst, ksize, anchor, borderType) is equivalent to boxFilter(src, dst, src.type(), anchor, true, borderType).
See also: boxFilter(), bilateralFilter(), GaussianBlur(), medianBlur()
GaussianBlur — Smoothes an image using a Gaussian filter.
C++: void GaussianBlur(InputArray src, OutputArray dst, Size ksize, double sigmaX, double sigmaY=0, int borderType=BORDER_DEFAULT)
Python: cv2.GaussianBlur(src, ksize, sigma1[, dst[, sigma2[, borderType]]]) → dst
Parameters:
- src – Source image.
- dst – Destination image of the same size and type as src.
- ksize – Gaussian kernel size. ksize.width and ksize.height can differ, but both must be positive and odd. Alternatively, they can be zero and are then computed from sigma*.
- sigmaX – Gaussian kernel standard deviation in the X direction.
- sigmaY – Gaussian kernel standard deviation in the Y direction. If sigmaY is zero, it is set equal to sigmaX; if both sigmas are zero, they are computed from ksize.width and ksize.height, respectively. See getGaussianKernel() for details. To fully control the result regardless of possible future modifications of these semantics, it is recommended to specify all of ksize, sigmaX, and sigmaY.
- borderType – Pixel extrapolation method. See borderInterpolate() for details.
The function convolves the source image with the specified Gaussian kernel. In-place filtering is supported.
See also: sepFilter2D(), filter2D(), blur(), boxFilter(), bilateralFilter(), medianBlur()
medianBlur — Smoothes an image using the median filter.
C++: void medianBlur(InputArray src, OutputArray dst, int ksize)
Python: cv2.medianBlur(src, ksize[, dst]) → dst
Parameters:
- src – Source 1-, 3-, or 4-channel image. When ksize is 3 or 5, the image depth should be CV_8U, CV_16U, or CV_32F. For larger aperture sizes, it can only be CV_8U.
- dst – Destination array of the same size and type as src.
- ksize – Aperture linear size. It must be odd and greater than 1, for example: 3, 5, 7, ...
The function smoothes an image using the median filter with a ksize × ksize aperture. Each channel of a multi-channel image is processed independently. In-place operation is supported.
See also: bilateralFilter(), blur(), boxFilter(), GaussianBlur()
bilateralFilter — Applies the bilateral filter to an image.
C++: void bilateralFilter(InputArray src, OutputArray dst, int d, double sigmaColor, double sigmaSpace, int borderType=BORDER_DEFAULT)
Python: cv2.bilateralFilter(src, d, sigmaColor, sigmaSpace[, dst[, borderType]]) → dst
Parameters:
- src – Source 8-bit or floating-point, 1-channel or 3-channel image.
- dst – Destination image of the same size and type as src.
- d – Diameter of each pixel neighborhood used during filtering. If it is non-positive, it is computed from sigmaSpace.
- sigmaColor – Filter sigma in the color space. A larger value means that farther colors within the pixel neighborhood (see sigmaSpace) will be mixed together, resulting in larger areas of semi-equal color.
- sigmaSpace – Filter sigma in the coordinate space. A larger value means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor). When d > 0, it specifies the neighborhood size regardless of sigmaSpace; otherwise, d is proportional to sigmaSpace.
The function applies bilateral filtering to the input image, as described in http://www.dai.ed.ac.uk/CVonline/LOCAL_COPIES/MANDUCHI1/Bilateral_Filtering.html. bilateralFilter can reduce unwanted noise very well while keeping edges fairly sharp; however, it is very slow compared to most filters.
Sigma values: for simplicity, you can set the two sigma values to be the same. If they are small (< 10), the filter will not have much effect; if they are large (> 150), they will have a very strong effect, making the image look "cartoonish".
Filter size: large filters (d > 5) are very slow, so d=5 is recommended for real-time applications, and perhaps d=9 for offline applications that need heavy noise filtering.