PCD Data Acquisition: Kinect + OpenNI + PCL Integration (Code)
Preface:

PCL uses point clouds as its data format, and the Kinect can serve directly as a source of 3D image data; OpenNI and PrimeSense provide the bridge between the two. To work with the Kinect data conveniently, it is still worth converting the raw data obtained through OpenNI into the point-cloud format, and PCD data is also more intuitive to manipulate.

Note: work through the theory at your own pace; the code is what matters most here, and it has been tested and confirmed to work.
(1) Differences between the Microsoft Kinect SDK and PrimeSense OpenNI

Original link: http://blog.csdn.net/wdxzkp/article/details/6608817
Note: the author wrote a whole series of posts that are worth consulting.

After playing with both the Microsoft Kinect SDK and the PrimeSense OpenNI SDK, here are some of my thoughts. (Note that the Microsoft SDK is the Beta version, so things may change when the final one is released.)
Microsoft Kinect SDK (Beta)

Pros:
- supports audio
- supports the motor/tilt
- full-body tracking:
  - includes head, hands, feet, clavicles
  - does not need a calibration pose
  - seems to deal better with occluded joints
- supports multiple sensors
- single no-fuss installer
- SDK has events for when a new video or depth frame is available

Cons:
- licensed for non-commercial use only
- only tracks the full body (no mode for hands-only or upper-body tracking)
- only calculates positions for the joints, not rotations
- does not yet offer alignment of the color and depth image streams to one another, although there are features to align individual coordinates, and there are hints that support may come later
- seems to consume more CPU power than OpenNI/NITE (not properly benchmarked)
- SDK does not have events for when a new user enters or leaves the frame, etc.
PrimeSense OpenNI/NITE

Pros:
- license includes commercial use
- includes a framework for hand tracking
- includes a framework for hand gesture recognition
- can automatically align the depth image stream to the color image
- full-body tracking:
  - also calculates rotations for the joints
  - supports a hands-only mode
- seems to consume less CPU power than the Microsoft Kinect SDK's tracker (not properly benchmarked)
- also supports the PrimeSense and ASUS WAVI Xtion sensors
- supports multiple sensors, although setup and enumeration is a bit quirky
- supports Windows (including Vista & XP), Linux, and Mac OS X
- comes with code for full support in the Unity3D game engine
- supports record/playback to/from disk
- supports streaming the raw infrared video data
- SDK has events for when a new user enters or leaves the frame, etc. (callbacks are provided for developers)

Cons:
- no support for audio
- no support for motor/tilt (although you can simultaneously use the CL-NUI motor drivers)
- full-body tracking:
  - lacks rotations for the head, hands, feet, clavicles
  - needs a calibration pose to start tracking (although it can be saved/loaded to/from disk for reuse)
  - occluded joints are not estimated
- three separate installers and a NITE license string (although the process can be automated with my auto driver installer)
- SDK does not have events for when a new video or depth frame is available
(Personal) conclusion: Microsoft seems to have the edge when working with skeletons and/or audio. OpenNI seems best suited when working with colored point clouds, on non-Win7 platforms, and/or on commercial projects.

When working with gestures in particular:
- If your sensor only sees the upper body/hands, and/or you want an existing framework to start from, use OpenNI/NITE.
- When your sensor can see the full body, the more stable Microsoft skeleton may be the best choice; however, you will have to code your own gesture recognition. (You would also have to extend OpenNI/NITE for full-body gestures.)
(2) Three reference articles used for the data acquisition:

Reading Kinect depth image data via OpenNI:
http://viml.nchc.org.tw/blog/paper_info.php?CLASS_ID=1&SUB_ID=1&PAPER_ID=215
Merging Kinect depth and color image data via OpenNI:
http://viml.nchc.org.tw/blog/paper_info.php?CLASS_ID=1&SUB_ID=1&PAPER_ID=216
Building a Kinect 3D point cloud via OpenNI:
http://viml.nchc.org.tw/blog/paper_info.php?CLASS_ID=1&SUB_ID=1&PAPER_ID=217

And one more, fairly detailed article:
Generating a point cloud from the Kinect with OpenNI:
http://blog.csdn.net/opensource07/article/details/7804246

(1) The author's example: reading the Kinect with OpenNI...
#include <stdlib.h>
#include <iostream>
#include <string>
#include <XnCppWrapper.h>

using namespace std;

void CheckOpenNIError( XnStatus eResult, string sStatus )
{
    if( eResult != XN_STATUS_OK )
        cerr << sStatus << " Error: " << xnGetStatusString( eResult ) << endl;
}

int main( int argc, char** argv )
{
    XnStatus eResult = XN_STATUS_OK;

    // 2. initialize context
    xn::Context mContext;
    eResult = mContext.Init();
    CheckOpenNIError( eResult, "initialize context" );

    // set map mode
    XnMapOutputMode mapMode;
    mapMode.nXRes = 640;
    mapMode.nYRes = 480;
    mapMode.nFPS = 30;

    // 3. create depth generator
    xn::DepthGenerator mDepthGenerator;
    eResult = mDepthGenerator.Create( mContext );
    CheckOpenNIError( eResult, "Create depth generator" );
    eResult = mDepthGenerator.SetMapOutputMode( mapMode );

    // 4. start generating data
    eResult = mContext.StartGeneratingAll();

    // 5. read data
    eResult = mContext.WaitAndUpdateAll();
    if( eResult == XN_STATUS_OK )
    {
        // 5. get the depth map
        const XnDepthPixel* pDepthMap = mDepthGenerator.GetDepthMap();
        // 6. do something with the depth map
    }

    // 7. stop
    mContext.StopGeneratingAll();
    mContext.Shutdown();
    return 0;
}

(2) Merging the depth and RGB images: a modification of the previous program.
#include <stdlib.h>
#include <iostream>
#include <string>
#include <XnCppWrapper.h>

using namespace std;

void CheckOpenNIError( XnStatus eResult, string sStatus )
{
    if( eResult != XN_STATUS_OK )
        cerr << sStatus << " Error: " << xnGetStatusString( eResult ) << endl;
}

int main( int argc, char** argv )
{
    XnStatus eResult = XN_STATUS_OK;

    // 2. initialize context
    xn::Context mContext;
    eResult = mContext.Init();
    CheckOpenNIError( eResult, "initialize context" );

    // 3. create depth generator
    xn::DepthGenerator mDepthGenerator;
    eResult = mDepthGenerator.Create( mContext );
    CheckOpenNIError( eResult, "Create depth generator" );

    // 4. create image generator
    xn::ImageGenerator mImageGenerator;
    eResult = mImageGenerator.Create( mContext );
    CheckOpenNIError( eResult, "Create image generator" );

    // 5. set map mode
    XnMapOutputMode mapMode;
    mapMode.nXRes = 640;
    mapMode.nYRes = 480;
    mapMode.nFPS = 30;
    eResult = mDepthGenerator.SetMapOutputMode( mapMode );
    eResult = mImageGenerator.SetMapOutputMode( mapMode );

    // 6. correct view port: align the depth map to the color image
    mDepthGenerator.GetAlternativeViewPointCap().SetViewPoint( mImageGenerator );

    // 7. start generating data
    eResult = mContext.StartGeneratingAll();

    // 8. read data (block until new frames are available)
    eResult = mContext.WaitAndUpdateAll();
    if( eResult == XN_STATUS_OK )
    {
        // 9a. get the depth map
        const XnDepthPixel* pDepthMap = mDepthGenerator.GetDepthMap();
        // 9b. get the image map
        const XnUInt8* pImageMap = mImageGenerator.GetImageMap();
    }

    // 10. stop
    mContext.StopGeneratingAll();
    mContext.Shutdown();
    return 0;
}
(3) Building the 3D point cloud: a modification of the previous program.

First, define a simple struct, SColorPoint3D:
struct SColorPoint3D
{
    float X;
    float Y;
    float Z;
    float R;
    float G;
    float B;

    SColorPoint3D( XnPoint3D pos, XnRGB24Pixel color )
    {
        X = pos.X;
        Y = pos.Y;
        Z = pos.Z;
        R = (float)color.nRed   / 255;
        G = (float)color.nGreen / 255;
        B = (float)color.nBlue  / 255;
    }
};
Its six fields record the point's position and its color. The constructor takes two OpenNI structures as parameters: an XnPoint3D for the position and an XnRGB24Pixel for the RGB color.
The coordinate conversion itself is wrapped in a function, GeneratePointCloud(), whose contents are as follows:
void GeneratePointCloud( xn::DepthGenerator& rDepthGen,
                         const XnDepthPixel* pDepth,
                         const XnRGB24Pixel* pImage,
                         vector<SColorPoint3D>& vPointCloud )
{
    // 1. number of points is the number of 2D image pixels
    xn::DepthMetaData mDepthMD;
    rDepthGen.GetMetaData( mDepthMD );
    unsigned int uPointNum = mDepthMD.FullXRes() * mDepthMD.FullYRes();

    // 2. build the data structure for the conversion
    XnPoint3D* pDepthPointSet = new XnPoint3D[ uPointNum ];
    unsigned int i, j, idxShift, idx;
    for( j = 0; j < mDepthMD.FullYRes(); ++j )
    {
        idxShift = j * mDepthMD.FullXRes();
        for( i = 0; i < mDepthMD.FullXRes(); ++i )
        {
            idx = idxShift + i;
            pDepthPointSet[idx].X = i;
            pDepthPointSet[idx].Y = j;
            pDepthPointSet[idx].Z = pDepth[idx];
        }
    }

    // 3. un-project points to real world
    XnPoint3D* p3DPointSet = new XnPoint3D[ uPointNum ];
    rDepthGen.ConvertProjectiveToRealWorld( uPointNum, pDepthPointSet, p3DPointSet );
    delete[] pDepthPointSet;

    // 4. build the point cloud
    for( i = 0; i < uPointNum; ++i )
    {
        // skip points with depth 0
        if( p3DPointSet[i].Z == 0 )
            continue;
        vPointCloud.push_back( SColorPoint3D( p3DPointSet[i], pImage[i] ) );
    }
    delete[] p3DPointSet;
}

The function takes the xn::DepthGenerator, together with the depth and color images already read from it, as its data sources;
it also takes a vector<SColorPoint3D> in which to store the converted 3D point data.

The depth image is still passed in as a const XnDepthPixel pointer, but for the color image Heresy switches to XnRGB24Pixel, which packs the RGB channels together and saves some index arithmetic. Because of this change, the earlier code that reads the color image,
const XnUInt8* pImageMap = mImageGenerator.GetImageMap();

must be changed to

const XnRGB24Pixel* pImageMap = mImageGenerator.GetRGB24ImageMap();
Back in the main program, the code that originally read the data was:
// 8. read data
eResult = mContext.WaitNoneUpdateAll();
if( eResult == XN_STATUS_OK )
{
    // 9a. get the depth map
    const XnDepthPixel* pDepthMap = mDepthGenerator.GetDepthMap();
    // 9b. get the image map
    const XnUInt8* pImageMap = mImageGenerator.GetImageMap();
}

As mentioned earlier, Heresy does not cover the OpenGL display here, so to keep the data fresh the program is changed to an infinite loop that repeatedly updates the data and performs the coordinate conversion;
the converted result is then reported very simply, by printing only the number of points.

After the modification:
// 8. read data
vector<SColorPoint3D> vPointCloud;
while( true )
{
    eResult = mContext.WaitNoneUpdateAll();

    // 9a. get the depth map
    const XnDepthPixel* pDepthMap = mDepthGenerator.GetDepthMap();
    // 9b. get the image map
    const XnRGB24Pixel* pImageMap = mImageGenerator.GetRGB24ImageMap();

    // 10. generate the point cloud
    vPointCloud.clear();
    GeneratePointCloud( mDepthGenerator, pDepthMap, pImageMap, vPointCloud );
    cout << "Point number: " << vPointCloud.size() << endl;
}
To draw the result with OpenGL, you would basically drop the infinite loop and instead read the Kinect data and convert it with GeneratePointCloud() right before each draw. If you do not rebuild polygons but simply draw the points one by one, as Heresy does, the result will look roughly like the video above.
Postscript:

That completes the excerpts. The program was compiled, the point cloud displayed, and the test run checked out without errors.