Kinect+OpenNI Study Notes 12: Recognizing Digits Shown by Simple Hand Gestures
Source: http://www.cnblogs.com/tornadomeet/archive/2012/11/04/2753185.html
Preface
This post describes a small experiment I did while playing with the Kinect: recognizing the digits shown by simple hand gestures without machine learning or similar AI methods. The recognition builds on the hand extraction described in my earlier post, "Analysis of Robert Walter's hand extraction code". Since the gestures are judged purely by geometric shape, the system only recognizes the digits 0~5. The hand segmentation works fairly well (its core code comes from OpenNI), but the digit recognition itself is easily disturbed and only works so-so; after all, this is just a simple experiment.
Experiment basics
First, the flowchart of the system:
[Figure: system flowchart]
The contour extraction, the polygon approximation of the contour, and the convex hull and convexity defect computations all use functions built into OpenCV. The digit recognition itself is a simple logical decision based on the geometric relations between the convex points, the concave points, and the hand center points (admittedly a rather crude method). Concretely, two center points are first located on the hand; the distance between them is a threshold set in the program, and one of them is the hand position tracked by OpenNI. Given these two center points, the program counts the convex and concave points lying above each of them. Of course, this only works if the user shows the digit with the fingers pointing upward (since there is no training on samples as in machine learning, the usage conditions are much stricter). From these four counts (plus two auxiliary counts set up in the program; see the code) and some simple logic, the digits 0~5 can be recognized. Other digits could be handled with analogous rules (multi-digit recognition could even be designed), but the more digits there are, the more complex the design becomes, because the rules interfere with each other, and such a non-general design has little practical value anyway.
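As a rough sketch of this counting idea (the point data, function names, and the single-count rule below are hypothetical; the real program combines two center points, concave counts, and distance thresholds into per-digit rules):

```cpp
#include <cassert>
#include <vector>

// Plain-C++ sketch of the counting step described above. Pt stands in for
// cv::Point; image y grows downward, so "above" means a smaller y value.
struct Pt { int x, y; };

// Count how many candidate points lie above the horizontal line y = centerY.
int countAbove(const std::vector<Pt>& pts, int centerY) {
    int n = 0;
    for (const Pt& p : pts)
        if (p.y <= centerY)
            ++n;
    return n;
}

// Crude digit guess: with the fingers pointing up, each raised fingertip
// contributes one convex-hull point above the hand center, so the count of
// such points approximates the digit being shown.
int guessDigit(const std::vector<Pt>& convexPts, int centerY) {
    return countAbove(convexPts, centerY);
}
```

The actual rules in main.cpp refine this with a second center point below the first and with the concave (defect) points, which is what makes digits with the same fingertip count distinguishable.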
OpenCV notes
void convexityDefects(InputArray contour, InputArray convexhull, OutputArray convexityDefects)
This function was already introduced in the earlier post "Analysis of Robert Walter's hand extraction code", where it was explained as follows:
Given an input contour and its convex hull, the function detects the contour's convexity defects. A convexity defect structure has four elements: the coordinates of the defect's start point, the coordinates of its end point, the coordinates of the point in the defect farthest from the hull edge, and that farthest distance. Parameter 3 is the output vector of defect structures.
A convexity defect is illustrated below:
[Figure: convexity defect diagram]
However, the three parameters deserve a more detailed explanation here:
The first parameter is named contour, which literally means the contour, but in my repeated experiments, passing the raw contour obtained from contour detection makes the program crash with a memory error inside convexityDefects(). So this program passes not the object's raw contour but its polygon approximation (the polygonal fitted curve), and then everything runs fine. In addition, during gesture recognition the contour detected in some frame can occasionally be tiny (for whatever reason), down to a single point, and convexityDefects() then fails with the following assertion error (screenshot not reproduced here):
Looking up the location of that error message in the OpenCV source:
The failure is at line 1969, i.e. the assertion CV_Assert( ptnum > 3 ); so at the point of failure ptnum <= 3. The line above it is ptnum = points.checkVector(2, CV_32S); so we need to understand what checkVector() does. Its OpenCV source is:
int Mat::checkVector(int _elemChannels, int _depth, bool _requireContinuous) const
{
    return (depth() == _depth || _depth <= 0) &&
           (isContinuous() || !_requireContinuous) &&
           ((dims == 2 && (((rows == 1 || cols == 1) && channels() == _elemChannels) || (cols == _elemChannels))) ||
            (dims == 3 && channels() == 1 && size.p[2] == _elemChannels && (size.p[0] == 1 || size.p[1] == 1) &&
             (isContinuous() || step.p[1] == step.p[2]*size.p[2])))
           ? (int)(total()*channels()/_elemChannels) : -1;
}

Roughly, if the Mat's depth, continuity, channel count, and shape satisfy the conditions, the function returns the total number of elements times the channel count divided by _elemChannels; otherwise it returns -1. Here the polygon curve (parameter 1) is checked with _elemChannels = 2, and for a vector<Point> wrapped in a Mat, total() equals the number of points and channels() equals 2, so checkVector(2, CV_32S) simply returns the number of points. The assertion ptnum > 3 therefore requires the polygon curve to contain at least 4 points. Put simply, before convexityDefects() can be used to detect the defects of a polygon curve, the curve in parameter 1 must have more than 3 points. So in this program I added if(Mat(approx_poly_curve).checkVector(2, CV_32S) > 3) before the call; the defect detection runs only when this condition holds, and the bug no longer appears.
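The guard can be checked with ordinary integers; this toy helper (the names are invented here) just mirrors the arithmetic that checkVector(2, CV_32S) performs for an N-point curve:

```cpp
#include <cassert>

// For a vector<Point> wrapped in a Mat, total() == N and channels() == 2,
// so checkVector(2, CV_32S) returns N * 2 / 2 == N, the number of points.
int checkVectorLikeReturn(int nPoints) {
    const int channels = 2;      // cv::Point is a 2-channel element
    const int elemChannels = 2;  // the value passed to checkVector()
    return nPoints * channels / elemChannels;
}

// Same condition as CV_Assert(ptnum > 3): the curve needs at least 4 points
// before convexityDefects() may safely be called on it.
bool safeForConvexityDefects(int nPoints) {
    return checkVectorLikeReturn(nPoints) > 3;
}
```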
The second parameter is usually produced by OpenCV's convexHull() function; it normally holds the indices of the hull points within the polygon curve's point set, stored as a vector of integers. According to the documentation, convexityDefects() expects this parameter as a Mat, so when calling it, the vector is typically converted directly with Mat(hull).
The third parameter is a collection of 4-element structures; in the C++ API it can simply be a vector<Vec4i>. The four integers in each Vec4i are, respectively, the index of the defect segment's start point, the index of its end point, the index of the point farthest from the hull edge, and that farthest distance (stored as a fixed-point integer; divide it by 256.0 to get the distance in pixels). The C version of OpenCV generally stores the points themselves rather than indices, as shown below:
struct CvConvexityDefect
{
    CvPoint* start;       // point of the contour where the defect begins
    CvPoint* end;         // point of the contour where the defect ends
    CvPoint* depth_point; // the farthest from the convex hull point within the defect
    float depth;          // distance between the farthest point and the convex hull
};
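To make the depth value concrete, here is a standalone computation of a defect's farthest-point distance (pure geometry, no OpenCV; the struct and function names are made up for this sketch):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Toy stand-in for a contour point.
struct P { double x, y; };

// Perpendicular distance from point p to the infinite line through a and b.
double distToLine(P p, P a, P b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    return std::fabs(dy * (p.x - a.x) - dx * (p.y - a.y)) / std::hypot(dx, dy);
}

// Depth of one defect: the largest distance from the contour points lying
// between the two hull ends to the hull edge joining those ends. This is
// the quantity convexityDefects() reports as the depth.
double defectDepth(const std::vector<P>& between, P hullStart, P hullEnd) {
    double best = 0.0;
    for (const P& p : between)
        best = std::max(best, distToLine(p, hullStart, hullEnd));
    return best;
}
```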
C/C++ notes:
On my setup, passing a plain integer expression to std::abs() failed to compile with a "no matching overload / ambiguous parameters" error; passing a long instead compiles fine, so in such cases the argument can simply be cast to long.
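A minimal sketch of that workaround (the helper name is invented; the cast is the point):

```cpp
#include <cassert>
#include <cstdlib>

// Computes |cy - py| the way the post does: cast both operands to long
// first, so std::abs() gets an unambiguous overload and the mixed
// unsigned/signed subtraction cannot wrap around.
long yDistance(unsigned int cy, int py) {
    return std::abs(static_cast<long>(cy) - static_cast<long>(py));
}
```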
Experiment results:
Shown below are the segmented hand images and the corresponding digit recognition results.
Recognition result for digit "0":
Recognition result for digit "1":
Recognition result for digit "2":
Recognition result for digit "3":
Recognition result for digit "4":
Recognition result for digit "5":
Main experiment code with comments:
The implementation follows the system flowchart above; the rough procedure is:
1. Compute the hand mask
2. Extract the contour of the mask
3. Fit a polygonal approximation curve to the contour
4. Compute the convex hull of the polygon curve and find the convex points
5. Compute the convexity defects of the polygon curve and find the concave points
6. Use the geometric relations between these convex/concave points and the hand center points for simple digit recognition
The copenni class is the one designed in an earlier post; here is the code of the main function.
main.cpp:
    #include <iostream>
    #include "opencv2/highgui/highgui.hpp"
    #include "opencv2/imgproc/imgproc.hpp"
    #include <opencv2/core/core.hpp>
    #include "copenni.cpp"

    #define DEPTH_SCALE_FACTOR 255./4096.
    #define ROI_HAND_WIDTH 140
    #define ROI_HAND_HEIGHT 140
    #define MEDIAN_BLUR_K 5
    #define XRES 640
    #define YRES 480
    #define DEPTH_SEGMENT_THRESH 5
    #define MAX_HANDS_COLOR 10
    #define MAX_HANDS_NUMBER 10
    #define HAND_LIKELY_AREA 2000
    #define DELTA_POINT_DISTENCE 25     // distance threshold between hand center point 1 and center point 2
    #define SEGMENT_POINT1_DISTANCE 27  // distance threshold between a convex point and hand center point 1
    #define SEGMENT_POINT2_DISTANCE 30  // distance threshold between a convex point and hand center point 2

    using namespace cv;
    using namespace xn;
    using namespace std;

    int main(int argc, char **argv)
    {
        unsigned int convex_number_above_point1 = 0;
        unsigned int concave_number_above_point1 = 0;
        unsigned int convex_number_above_point2 = 0;
        unsigned int concave_number_above_point2 = 0;
        unsigned int convex_assist_above_point1 = 0;
        unsigned int convex_assist_above_point2 = 0;
        unsigned int point_y1 = 0;
        unsigned int point_y2 = 0;
        int number_result = -1;
        bool recognition_flag = false;  // flag signalling that digit recognition has started
        vector<Scalar> color_array;     // 10 default colors
        {
            color_array.push_back(Scalar(255, 0, 0));
            color_array.push_back(Scalar(0, 255, 0));
            color_array.push_back(Scalar(0, 0, 255));
            color_array.push_back(Scalar(255, 0, 255));
            color_array.push_back(Scalar(255, 255, 0));
            color_array.push_back(Scalar(0, 255, 255));
            color_array.push_back(Scalar(128, 255, 0));
            color_array.push_back(Scalar(0, 128, 255));
            color_array.push_back(Scalar(255, 0, 128));
            color_array.push_back(Scalar(255, 128, 255));
        }
        vector<unsigned int> hand_depth(MAX_HANDS_NUMBER, 0);
        vector<Rect> hands_roi(MAX_HANDS_NUMBER, Rect(XRES/2, YRES/2, ROI_HAND_WIDTH, ROI_HAND_HEIGHT));
        namedWindow("color image", CV_WINDOW_AUTOSIZE);
        namedWindow("depth image", CV_WINDOW_AUTOSIZE);
        namedWindow("hand_segment", CV_WINDOW_AUTOSIZE);     // shows the segmented hand region
        namedWindow("handrecognition", CV_WINDOW_AUTOSIZE);  // shows the 0~5 digit recognition image

        COpenNI openni;
        if(!openni.Initial())
            return 1;
        if(!openni.Start())
            return 1;
        while(1) {
            if(!openni.UpdateData()) {
                return 1;
            }
            /* grab and show the color image */
            Mat color_image_src(openni.image_metadata_.YRes(), openni.image_metadata_.XRes(),
                                CV_8UC3, (char *)openni.image_metadata_.Data());
            Mat color_image;
            cvtColor(color_image_src, color_image, CV_RGB2BGR);
            Mat hand_segment_mask(color_image.size(), CV_8UC1, Scalar::all(0));
            for(auto itUser = openni.hand_points_.cbegin(); itUser != openni.hand_points_.cend(); ++itUser) {
                point_y1 = itUser->second.Y;
                point_y2 = itUser->second.Y + DELTA_POINT_DISTENCE;
                circle(color_image, Point(itUser->second.X, itUser->second.Y),
                       5, color_array.at(itUser->first % color_array.size()), 3, 8);
                /* set the depth for each hand */
                hand_depth.at(itUser->first % MAX_HANDS_COLOR) =
                    (unsigned int)(itUser->second.Z * DEPTH_SCALE_FACTOR);  // using itUser->first without the modulo causes a bug
                /* set a separate region of interest for each hand */
                hands_roi.at(itUser->first % MAX_HANDS_NUMBER) = Rect(itUser->second.X - ROI_HAND_WIDTH/2,
                                                                      itUser->second.Y - ROI_HAND_HEIGHT/2,
                                                                      ROI_HAND_WIDTH, ROI_HAND_HEIGHT);
                hands_roi.at(itUser->first % MAX_HANDS_NUMBER).x = itUser->second.X - ROI_HAND_WIDTH/2;
                hands_roi.at(itUser->first % MAX_HANDS_NUMBER).y = itUser->second.Y - ROI_HAND_HEIGHT/2;
                hands_roi.at(itUser->first % MAX_HANDS_NUMBER).width = ROI_HAND_WIDTH;
                hands_roi.at(itUser->first % MAX_HANDS_NUMBER).height = ROI_HAND_HEIGHT;
                if(hands_roi.at(itUser->first % MAX_HANDS_NUMBER).x <= 0)
                    hands_roi.at(itUser->first % MAX_HANDS_NUMBER).x = 0;
                if(hands_roi.at(itUser->first % MAX_HANDS_NUMBER).x > XRES)
                    hands_roi.at(itUser->first % MAX_HANDS_NUMBER).x = XRES;
                if(hands_roi.at(itUser->first % MAX_HANDS_NUMBER).y <= 0)
                    hands_roi.at(itUser->first % MAX_HANDS_NUMBER).y = 0;
                if(hands_roi.at(itUser->first % MAX_HANDS_NUMBER).y > YRES)
                    hands_roi.at(itUser->first % MAX_HANDS_NUMBER).y = YRES;
            }
            imshow("color image", color_image);

            /* grab and show the depth image */
            Mat depth_image_src(openni.depth_metadata_.YRes(), openni.depth_metadata_.XRes(),
                                CV_16UC1, (char *)openni.depth_metadata_.Data());  // the Kinect depth image is unsigned 16-bit data
            Mat depth_image;
            depth_image_src.convertTo(depth_image, CV_8U, DEPTH_SCALE_FACTOR);
            imshow("depth image", depth_image);

            // extract the hand part of the mask;
            // no matter how many channels the source image has, the mask can be declared single-channel
            for(auto itUser = openni.hand_points_.cbegin(); itUser != openni.hand_points_.cend(); ++itUser) {
                for(int i = hands_roi.at(itUser->first % MAX_HANDS_NUMBER).x;
                        i < std::min(hands_roi.at(itUser->first % MAX_HANDS_NUMBER).x + hands_roi.at(itUser->first % MAX_HANDS_NUMBER).width, XRES); i++)
                    for(int j = hands_roi.at(itUser->first % MAX_HANDS_NUMBER).y;
                            j < std::min(hands_roi.at(itUser->first % MAX_HANDS_NUMBER).y + hands_roi.at(itUser->first % MAX_HANDS_NUMBER).height, YRES); j++) {
                        hand_segment_mask.at<unsigned char>(j, i) =
                            ((hand_depth.at(itUser->first % MAX_HANDS_NUMBER) - DEPTH_SEGMENT_THRESH) < depth_image.at<unsigned char>(j, i))
                            & ((hand_depth.at(itUser->first % MAX_HANDS_NUMBER) + DEPTH_SEGMENT_THRESH) > depth_image.at<unsigned char>(j, i));
                    }
            }
            medianBlur(hand_segment_mask, hand_segment_mask, MEDIAN_BLUR_K);
            Mat hand_segment(color_image.size(), CV_8UC3);
            color_image.copyTo(hand_segment, hand_segment_mask);

            /* extract contours from the mask image and draw them on the recognition image */
            std::vector< std::vector<Point> > contours;
            findContours(hand_segment_mask, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);  // find the contours of the mask image
            Mat hand_recognition_image = Mat::zeros(color_image.rows, color_image.cols, CV_8UC3);
            for(int i = 0; i < contours.size(); i++) {  // polygon, hull and defects are computed only when a contour is detected
                recognition_flag = true;
                /* fit a polygon curve to the contour */
                Mat contour_mat = Mat(contours[i]);
                if(contourArea(contour_mat) > HAND_LIKELY_AREA) {  // region large enough to plausibly be a hand
                    std::vector<Point> approx_poly_curve;
                    approxPolyDP(contour_mat, approx_poly_curve, 10, true);  // polygon approximation of the contour
                    std::vector< std::vector<Point> > approx_poly_curve_debug;
                    approx_poly_curve_debug.push_back(approx_poly_curve);
                    drawContours(hand_recognition_image, contours, i, Scalar(255, 0, 0), 1, 8);  // draw the contour
                    // drawContours(hand_recognition_image, approx_poly_curve_debug, 0, Scalar(256, 128, 128), 1, 8);  // draw the polygon curve

                    /* compute the convex hull of the polygon curve */
                    vector<int> hull;
                    convexHull(Mat(approx_poly_curve), hull, true);
                    for(int i = 0; i < hull.size(); i++) {
                        circle(hand_recognition_image, approx_poly_curve[hull[i]], 2, Scalar(0, 255, 0), 2, 8);
                        /* count the convex points above center point 1 */
                        if(approx_poly_curve[hull[i]].y <= point_y1) {
                            /* y distance between the convex point and center point 1 */
                            long dis_point1 = abs(long(point_y1 - approx_poly_curve[hull[i]].y));
                            int dis1 = point_y1 - approx_poly_curve[hull[i]].y;
                            if(dis_point1 > SEGMENT_POINT1_DISTANCE && dis1 >= 0) {
                                convex_assist_above_point1++;
                            }
                            convex_number_above_point1++;
                        }
                        /* count the convex points above center point 2 */
                        if(approx_poly_curve[hull[i]].y <= point_y2) {
                            /* y distance between the convex point and center point 2 */
                            long dis_point2 = abs(long(point_y2 - approx_poly_curve[hull[i]].y));
                            int dis2 = point_y2 - approx_poly_curve[hull[i]].y;
                            if(dis_point2 > SEGMENT_POINT2_DISTANCE && dis2 >= 0) {
                                convex_assist_above_point2++;
                            }
                            convex_number_above_point2++;
                        }
                    }

                    /* compute the convexity defects of the polygon curve */
                    std::vector<Vec4i> convexity_defects;
                    if(Mat(approx_poly_curve).checkVector(2, CV_32S) > 3)
                        convexityDefects(approx_poly_curve, Mat(hull), convexity_defects);
                    for(int i = 0; i < convexity_defects.size(); i++) {
                        circle(hand_recognition_image, approx_poly_curve[convexity_defects[i][2]], 2, Scalar(0, 0, 255), 2, 8);
                        /* count the concave points above center point 1 */
                        if(approx_poly_curve[convexity_defects[i][2]].y <= point_y1)
                            concave_number_above_point1++;
                        /* count the concave points above center point 2 */
                        if(approx_poly_curve[convexity_defects[i][2]].y <= point_y2)
                            concave_number_above_point2++;
                    }
                }
            }

            /* draw the hand center points */
            for(auto itUser = openni.hand_points_.cbegin(); itUser != openni.hand_points_.cend(); ++itUser) {
                circle(hand_recognition_image, Point(itUser->second.X, itUser->second.Y), 3, Scalar(0, 255, 255), 3, 8);
                circle(hand_recognition_image, Point(itUser->second.X, itUser->second.Y + 25), 3, Scalar(255, 0, 255), 3, 8);
            }

            /* recognize the digits 0~5 */
            // recognize "0"
            if((convex_assist_above_point1 == 0 && convex_number_above_point2 >= 2 && convex_number_above_point2 <= 3 &&
                concave_number_above_point2 <= 1 && concave_number_above_point1 <= 1) ||
               (concave_number_above_point1 == 0 || concave_number_above_point2 == 0) && recognition_flag == true)
                number_result = 0;
            // recognize "1"
            if(convex_assist_above_point1 == 1 && convex_number_above_point1 >= 1 && convex_number_above_point1 <= 2 &&
               convex_number_above_point2 >= 2 && convex_assist_above_point2 == 1)
                number_result = 1;
            // recognize "2"
            if(convex_number_above_point1 == 2 && concave_number_above_point1 == 1 && convex_assist_above_point2 == 2
               /*convex_assist_above_point1 <= 1*/ && concave_number_above_point2 == 1)
                number_result = 2;
            // recognize "3"
            if(convex_number_above_point1 == 3 && concave_number_above_point1 <= 3 && concave_number_above_point1 >= 1 &&
               convex_number_above_point2 >= 3 && convex_number_above_point2 <= 4 && convex_assist_above_point2 == 3)
                number_result = 3;
            // recognize "4"
            if(convex_number_above_point1 == 4 && concave_number_above_point1 <= 3 && concave_number_above_point1 >= 2 &&
               convex_number_above_point2 == 4)
                number_result = 4;
            // recognize "5"
            if(convex_number_above_point1 >= 4 && convex_number_above_point2 == 5 && concave_number_above_point2 >= 3 &&
               convex_number_above_point2 >= 4)
                number_result = 5;
            if(number_result != 0 && number_result != 1 && number_result != 2 &&
               number_result != 3 && number_result != 4 && number_result != 5)
                number_result = -1;

            /* show the matched digit on the recognition image */
            std::stringstream number_str;
            number_str << number_result;
            putText(hand_recognition_image, "Match: ", Point(0, 60), 4, 1, Scalar(0, 255, 0), 2, 0);
            if(number_result == -1)
                putText(hand_recognition_image, " ", Point(120, 60), 4, 2, Scalar(255, 0, 0), 2, 0);
            else
                putText(hand_recognition_image, number_str.str(), Point(150, 60), 4, 2, Scalar(255, 0, 0), 2, 0);
            imshow("handrecognition", hand_recognition_image);
            imshow("hand_segment", hand_segment);

            /* reset the per-frame variables at the end of each loop */
            convex_number_above_point1 = 0;
            convex_number_above_point2 = 0;
            concave_number_above_point1 = 0;
            concave_number_above_point2 = 0;
            convex_assist_above_point1 = 0;
            convex_assist_above_point2 = 0;
            number_result = -1;
            recognition_flag = false;
            number_str.clear();
            waitKey(20);
        }
    }
copenni.h:
    #ifndef COPENNI_H
    #define COPENNI_H

    #include <XnCppWrapper.h>
    #include <XnCyclicStackT.h>
    #include <XnHashT.h>
    #include <XnListT.h>
    #include <iostream>
    #include <map>

    using namespace xn;
    using namespace std;

    class COpenNI {
    public:
        COpenNI();
        ~COpenNI();
        /* internal OpenNI initialization and property setup */
        bool Initial();
        /* start OpenNI reading Kinect data */
        bool Start();
        /* update the data read by OpenNI */
        bool UpdateData();
        /* get the color image node */
        ImageGenerator& getImageGenerator();
        /* get the depth image node */
        DepthGenerator& getDepthGenerator();
        /* get the user (body) node */
        UserGenerator& getUserGenerator();
        /* get the gesture node */
        GestureGenerator& getGestureGenerator();
        /* get the hands node */
        HandsGenerator& getHandGenerator();

        DepthMetaData depth_metadata_;  // holds the depth image data
        ImageMetaData image_metadata_;  // holds the color image data
        std::map<XnUserID, XnPoint3D> hand_points_;  // stores the current point of each tracked hand
        std::map< XnUserID, vector<XnPoint3D> > hands_track_points_;  // stores the track points used later to draw each hand's trajectory

    private:
        /* returns true when an error occurred, false otherwise */
        bool CheckError(const char* error);
        /* callback: a new user has entered the scene */
        static void XN_CALLBACK_TYPE CBNewUser(UserGenerator &generator, XnUserID user, void *p_cookie);
        /* callback: skeleton calibration completed */
        static void XN_CALLBACK_TYPE CBCalibrationComplete(SkeletonCapability &skeleton, XnUserID user,
                                                           XnCalibrationStatus calibration_error, void *p_cookie);
        /* callback: a gesture has been fully recognized */
        static void XN_CALLBACK_TYPE CBGestureRecognized(xn::GestureGenerator &generator, const XnChar *strGesture,
                                                         const XnPoint3D *pIDPosition, const XnPoint3D *pEndPosition,
                                                         void *pCookie);
        /* callback: the start of a gesture has been detected */
        static void XN_CALLBACK_TYPE CBGestureProgress(xn::GestureGenerator &generator, const XnChar *strGesture,
                                                       const XnPoint3D *pPosition, XnFloat fProgress, void *pCookie);
        /* callback: a hand has been created */
        static void XN_CALLBACK_TYPE HandCreate(HandsGenerator& rHands, XnUserID xUID, const XnPoint3D* pPosition,
                                                XnFloat fTime, void* pCookie);
        /* callback: a hand position has been updated */
        static void XN_CALLBACK_TYPE HandUpdate(HandsGenerator& rHands, XnUserID xUID, const XnPoint3D* pPosition,
                                                XnFloat fTime, void* pCookie);
        /* callback: a hand has been destroyed */
        static void XN_CALLBACK_TYPE HandDestroy(HandsGenerator& rHands, XnUserID xUID, XnFloat fTime, void* pCookie);

        XnStatus status_;
        Context context_;
        XnMapOutputMode xmode_;
        UserGenerator user_generator_;
        ImageGenerator image_generator_;
        DepthGenerator depth_generator_;
        GestureGenerator gesture_generator_;
        HandsGenerator hand_generator_;
    };

    #endif // COPENNI_H
copenni.cpp:
    #include "copenni.h"
    #include <XnCppWrapper.h>
    #include <iostream>
    #include <map>

    using namespace xn;
    using namespace std;

    COpenNI::COpenNI() {
    }

    COpenNI::~COpenNI() {
    }

    bool COpenNI::Initial() {
        status_ = context_.Init();
        if(CheckError("Context initial failed!")) {
            return false;
        }
        context_.SetGlobalMirror(true);  // enable mirroring
        xmode_.nXRes = 640;
        xmode_.nYRes = 480;
        xmode_.nFPS = 30;
        // create the color node
        status_ = image_generator_.Create(context_);
        if(CheckError("Create image generator error!")) {
            return false;
        }
        // set the color image output mode
        status_ = image_generator_.SetMapOutputMode(xmode_);
        if(CheckError("SetMapOutputMdoe error!")) {
            return false;
        }
        // create the depth node
        status_ = depth_generator_.Create(context_);
        if(CheckError("Create depth generator error!")) {
            return false;
        }
        // set the depth image output mode
        status_ = depth_generator_.SetMapOutputMode(xmode_);
        if(CheckError("SetMapOutputMdoe error!")) {
            return false;
        }
        // create the gesture node
        status_ = gesture_generator_.Create(context_);
        if(CheckError("Create gesture generator error!")) {
            return false;
        }
        /* add the gesture types to recognize */
        gesture_generator_.AddGesture("Wave", NULL);
        gesture_generator_.AddGesture("click", NULL);
        gesture_generator_.AddGesture("RaiseHand", NULL);
        gesture_generator_.AddGesture("MovingHand", NULL);
        // create the hands node
        status_ = hand_generator_.Create(context_);
        if(CheckError("Create hand generaotr error!")) {
            return false;
        }
        // create the user node
        status_ = user_generator_.Create(context_);
        if(CheckError("Create gesturen generator error!")) {
            return false;
        }
        // viewpoint correction
        status_ = depth_generator_.GetAlternativeViewPointCap().SetViewPoint(image_generator_);
        if(CheckError("Can't set the alternative view point on depth generator!")) {
            return false;
        }
        // register the gesture-related callbacks
        XnCallbackHandle gesture_cb;
        gesture_generator_.RegisterGestureCallbacks(CBGestureRecognized, CBGestureProgress, this, gesture_cb);
        // register the hand-related callbacks
        XnCallbackHandle hands_cb;
        hand_generator_.RegisterHandCallbacks(HandCreate, HandUpdate, HandDestroy, this, hands_cb);
        // register the callback for a person entering the field of view
        XnCallbackHandle new_user_handle;
        user_generator_.RegisterUserCallbacks(CBNewUser, NULL, NULL, new_user_handle);
        user_generator_.GetSkeletonCap().SetSkeletonProfile(XN_SKEL_PROFILE_ALL);  // use all joints (15 in total)
        // register the skeleton-calibration-complete callback
        XnCallbackHandle calibration_complete;
        user_generator_.GetSkeletonCap().RegisterToCalibrationComplete(CBCalibrationComplete, this, calibration_complete);
        return true;
    }

    bool COpenNI::Start() {
        status_ = context_.StartGeneratingAll();
        if(CheckError("Start generating error!")) {
            return false;
        }
        return true;
    }

    bool COpenNI::UpdateData() {
        status_ = context_.WaitNoneUpdateAll();
        if(CheckError("Update date error!")) {
            return false;
        }
        // fetch the data
        image_generator_.GetMetaData(image_metadata_);
        depth_generator_.GetMetaData(depth_metadata_);
        return true;
    }

    ImageGenerator &COpenNI::getImageGenerator() {
        return image_generator_;
    }

    DepthGenerator &COpenNI::getDepthGenerator() {
        return depth_generator_;
    }

    UserGenerator &COpenNI::getUserGenerator() {
        return user_generator_;
    }

    GestureGenerator &COpenNI::getGestureGenerator() {
        return gesture_generator_;
    }

    HandsGenerator &COpenNI::getHandGenerator() {
        return hand_generator_;
    }

    bool COpenNI::CheckError(const char *error) {
        if(status_ != XN_STATUS_OK) {
            cerr << error << ": " << xnGetStatusString(status_) << endl;
            return true;
        }
        return false;
    }

    void COpenNI::CBNewUser(UserGenerator &generator, XnUserID user, void *p_cookie) {
        // get the skeleton capability and request calibration for the newly detected user
        generator.GetSkeletonCap().RequestCalibration(user, true);
    }

    void COpenNI::CBCalibrationComplete(SkeletonCapability &skeleton, XnUserID user,
                                        XnCalibrationStatus calibration_error, void *p_cookie) {
        if(calibration_error == XN_CALIBRATION_STATUS_OK) {
            skeleton.StartTracking(user);  // start tracking the user once calibration succeeds
        }
        else {
            UserGenerator *p_user = (UserGenerator*)p_cookie;
            skeleton.RequestCalibration(user, true);  // retry calibration when it fails
        }
    }

    void COpenNI::CBGestureRecognized(GestureGenerator &generator, const XnChar *strGesture,
                                      const XnPoint3D *pIDPosition, const XnPoint3D *pEndPosition, void *pCookie) {
        COpenNI *openni = (COpenNI*)pCookie;
        openni->hand_generator_.StartTracking(*pEndPosition);
    }

    void COpenNI::CBGestureProgress(GestureGenerator &generator, const XnChar *strGesture,
                                    const XnPoint3D *pPosition, XnFloat fProgress, void *pCookie) {
    }

    void COpenNI::HandCreate(HandsGenerator &rHands, XnUserID xUID, const XnPoint3D *pPosition,
                             XnFloat fTime, void *pCookie) {
        COpenNI *openni = (COpenNI*)pCookie;
        XnPoint3D project_pos;
        openni->depth_generator_.ConvertRealWorldToProjective(1, pPosition, &project_pos);
        pair<XnUserID, XnPoint3D> hand_point_pair(xUID, XnPoint3D());  // the second member of the pair can be default-constructed
        hand_point_pair.second = project_pos;
        openni->hand_points_.insert(hand_point_pair);  // store the detected hand in the hand_points_ map
        pair< XnUserID, vector<XnPoint3D> > hand_track_point(xUID, vector<XnPoint3D>());
        hand_track_point.second.push_back(project_pos);
        openni->hands_track_points_.insert(hand_track_point);
    }

    void COpenNI::HandUpdate(HandsGenerator &rHands, XnUserID xUID, const XnPoint3D *pPosition,
                             XnFloat fTime, void *pCookie) {
        COpenNI *openni = (COpenNI*)pCookie;
        XnPoint3D project_pos;
        openni->depth_generator_.ConvertRealWorldToProjective(1, pPosition, &project_pos);
        openni->hand_points_.find(xUID)->second = project_pos;
        openni->hands_track_points_.find(xUID)->second.push_back(project_pos);
    }

    void COpenNI::HandDestroy(HandsGenerator &rHands, XnUserID xUID, XnFloat fTime, void *pCookie) {
        COpenNI *openni = (COpenNI*)pCookie;
        openni->hand_points_.erase(openni->hand_points_.find(xUID));
        openni->hands_track_points_.erase(openni->hands_track_points_.find(xUID));
    }
Experiment summary:
As the experiment shows, the recognition is quite sensitive to interference. The follow-up work is therefore to build a gesture database, find a good hand feature vector, and a good classifier. This may well become the research topic of my master's thesis. Onward!
References:
- Analysis of Robert Walter's hand extraction code
- Multi-hand segmentation without skeleton tracking
- Design of the gesture-related classes for driving the Kinect with OpenNI
Author: tornadomeet. Source: http://www.cnblogs.com/tornadomeet. You are welcome to repost or share, but please credit the original source.