ROS + OpenCV
Testing the vision nodes
First run a few tests and record the output.
In the node graphs, circles are nodes and rectangles are topics.
1.robot_camera.launch
@robot:~$ roslaunch robot_vision robot_camera.launch
... logging to /home/jym/.ros/log/bff715b6-9201-11ec-b271-ac8247315e93/roslaunch-robot-8830.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://192.168.0.110:39005/

SUMMARY
========

PARAMETERS
 * /rosdistro: melodic
 * /rosversion: 1.14.10
 * /uvc_camera/camera_info_url: file:///home/jym...
 * /uvc_camera/device: /dev/video0
 * /uvc_camera/fps: 30
 * /uvc_camera/frame_id: /base_camera_link
 * /uvc_camera/image_height: 480
 * /uvc_camera/image_width: 640
 * /uvc_camera/io_method: mmap
 * /uvc_camera/pixel_format: jpeg

NODES
  /
    uvc_camera (uvc_camera/uvc_camera_node)

auto-starting new master
process[master]: started with pid [8840]
ROS_MASTER_URI=http://192.168.0.110:11311
setting /run_id to bff715b6-9201-11ec-b271-ac8247315e93
process[rosout-1]: started with pid [8852]
started core service [/rosout]
process[uvc_camera-2]: started with pid [8856]
[ INFO] [1645329679.219295497]: camera calibration URL: file:///home/jym/catkin_ws/src/robot_vision/config/astrapro.yaml
opening /dev/video0
pixfmt 0 = 'MJPG' desc = 'Motion-JPEG'
  discrete: 640x480: 1/30
  discrete: 1920x1080: 1/30
  discrete: 1280x720: 1/30
  discrete: 320x240: 1/30
  discrete: 640x480: 1/30
pixfmt 1 = 'YUYV' desc = 'YUYV 4:2:2'
  discrete: 640x480: 1/30
  discrete: 1280x720: 1/10
  discrete: 320x240: 1/30
  discrete: 640x480: 1/30
int (Brightness, 0, id = 980900): -64 to 64 (1)
int (Contrast, 0, id = 980901): 0 to 64 (1)
int (Saturation, 0, id = 980902): 0 to 128 (1)
int (Hue, 0, id = 980903): -40 to 40 (1)
bool (White Balance Temperature, Auto, 0, id = 98090c): 0 to 1 (1)
int (Gamma, 0, id = 980910): 72 to 500 (1)
int (Gain, 0, id = 980913): 0 to 100 (1)
menu (Power Line Frequency, 0, id = 980918): 0 to 2 (1)
  0: Disabled
  1: 50 Hz
  2: 60 Hz
int (White Balance Temperature, 16, id = 98091a): 2800 to 6500 (1)
int (Sharpness, 0, id = 98091b): 0 to 6 (1)
int (Backlight Compensation, 0, id = 98091c): 0 to 2 (1)
menu (Exposure, Auto, 0, id = 9a0901): 0 to 3 (1)
int (Exposure (Absolute), 16, id = 9a0902): 1 to 5000 (1)
bool (Exposure, Auto Priority, 0, id = 9a0903): 0 to 1 (1)

@robot:~$ rostopic list
/camera_info
/image_raw
/image_raw/compressed
/image_raw/compressed/parameter_descriptions
/image_raw/compressed/parameter_updates
/image_raw/compressedDepth
/image_raw/compressedDepth/parameter_descriptions
/image_raw/compressedDepth/parameter_updates
/image_raw/theora
/image_raw/theora/parameter_descriptions
/image_raw/theora/parameter_updates
/rosout
/rosout_agg

@robot:~$ rostopic info /camera_info
Type: sensor_msgs/CameraInfo
Publishers:
 * /uvc_camera (http://192.168.0.110:43741/)
Subscribers: None

@robot:~$ rostopic echo /camera_info
header:
  seq: 8269
  stamp:
    secs: 1645330011
    nsecs: 812194467
  frame_id: "/base_camera_link"
height: 480
width: 640
distortion_model: "plumb_bob"
D: [0.155778, -0.139286, 0.023517, 0.023536, 0.0]
K: [588.973899, 0.0, 340.56508, 0.0, 588.74004, 276.946637, 0.0, 0.0, 1.0]
R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
P: [616.194702, 0.0, 352.115424, 0.0, 0.0, 618.615906, 285.316735, 0.0, 0.0, 0.0, 1.0, 0.0]
binning_x: 0
binning_y: 0
roi:
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: False
---
header:
  seq: 8270
  stamp:
    secs: 1645330011
    nsecs: 851798254
  frame_id: "/base_camera_link"
height: 480
width: 640
distortion_model: "plumb_bob"
D: [0.155778, -0.139286, 0.023517, 0.023536, 0.0]
K: [588.973899, 0.0, 340.56508, 0.0, 588.74004, 276.946637, 0.0, 0.0, 1.0]
R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
P: [616.194702, 0.0, 352.115424, 0.0, 0.0, 618.615906, 285.316735, 0.0, 0.0, 0.0, 1.0, 0.0]
binning_x: 0
binning_y: 0
roi:
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: False
---

2. face_detection.launch
@robot:~$ roslaunch robot_vision face_recognition.launch
... logging to /home/jym/.ros/log/1a209a74-9204-11ec-9c2b-ac8247315e93/roslaunch-robot-16110.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://192.168.0.110:35649/

SUMMARY
========

PARAMETERS
 * /face_detection/debug_view: False
 * /face_detection/eyes_cascade_name: /usr/share/opencv...
 * /face_detection/face_cascade_name: /usr/share/opencv...
 * /face_detection/queue_size: 3
 * /face_detection/use_camera_info: False
 * /face_recognition/data_dir: /home/jym/catk...
 * /face_recognition/queue_size: 100
 * /rosdistro: melodic
 * /rosversion: 1.14.10

NODES
  /
    face_detection (opencv_apps/face_detection)
    face_recognition (opencv_apps/face_recognition)

ROS_MASTER_URI=http://192.168.0.110:11311
process[face_detection-1]: started with pid [16141]
process[face_recognition-2]: started with pid [16142]
[ INFO] [1645330916.918482639]: Initializing nodelet with 1 worker threads.
[ INFO] [1645330917.362862805]: training with 1 images

3. face_recognition_train.launch
@robot:~$ roslaunch robot_vision face_recognition_train.launch
... logging to /home/jym/.ros/log/1a209a74-9204-11ec-9c2b-ac8247315e93/roslaunch-robot-16483.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://192.168.0.110:42635/

SUMMARY
========

PARAMETERS
 * /rosdistro: melodic
 * /rosversion: 1.14.10

NODES
  /
    face_recognition_trainer (opencv_apps/face_recognition_trainer.py)

ROS_MASTER_URI=http://192.168.0.110:11311
process[face_recognition_trainer-1]: started with pid [16519]
Please input your name and press Enter: joys
Your name is joys. Correct? [y/n]: y
Please stand at the center of the camera and press Enter:
taking picture...
One more picture? [y/n]: n
sending to trainer...
OK. Trained successfully!
[face_recognition_trainer-1] process has finished cleanly
log file: /home/jym/.ros/log/1a209a74-9204-11ec-9c2b-ac8247315e93/face_recognition_trainer-1*.log
all processes on machine have died, roslaunch will exit
shutting down processing monitor...
... shutting down processing monitor complete
done

4. hough_lines.launch
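hough_lines finds lines by letting every edge pixel vote in (ρ, θ) space; the rho, theta and threshold parameters shown in the log below set the accumulator's ρ resolution, θ resolution and minimum vote count. A standalone numpy sketch of that voting scheme (illustrative only, not the opencv_apps implementation):

```python
import numpy as np

def hough_vote(points, rho_res=1.0, theta_res=np.deg2rad(1.0), max_rho=100):
    """Vote in (rho, theta) space for a set of edge pixels.
    rho_res / theta_res mirror the node's rho / theta parameters."""
    thetas = np.arange(0, np.pi, theta_res)
    n_rho = int(2 * max_rho / rho_res) + 1
    acc = np.zeros((n_rho, len(thetas)), dtype=int)
    for x, y in points:
        # each pixel votes once per theta: rho = x*cos(theta) + y*sin(theta)
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rhos + max_rho) / rho_res).astype(int)
        valid = (idx >= 0) & (idx < n_rho)
        acc[idx[valid], np.arange(len(thetas))[valid]] += 1
    return acc, thetas

# pixels on the horizontal line y = 20 all agree on (rho = 20, theta = 90 deg)
points = [(x, 20) for x in range(0, 50, 5)]
acc, thetas = hough_vote(points)
r, t = np.unravel_index(acc.argmax(), acc.shape)  # peak of the accumulator
```

A threshold of 125 in the node then means a line is only reported when its accumulator cell collects at least 125 votes.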
@robot:~$ roslaunch robot_vision hough_lines.launch
... logging to /home/jym/.ros/log/bab0a44c-9205-11ec-b79c-ac8247315e93/roslaunch-robot-19161.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://192.168.0.110:33599/

SUMMARY
========

PARAMETERS
 * /hough_lines/debug_view: False
 * /hough_lines/hough_type: 0
 * /hough_lines/maxLineGrap: 10
 * /hough_lines/minLineLength: 30
 * /hough_lines/queue_size: 3
 * /hough_lines/rho: 1.0
 * /hough_lines/theta: 1.0
 * /hough_lines/threshold: 125
 * /hough_lines/use_camera_info: False
 * /rosdistro: melodic
 * /rosversion: 1.14.10

NODES
  /
    hough_lines (opencv_apps/hough_lines)

ROS_MASTER_URI=http://192.168.0.110:11311
process[hough_lines-1]: started with pid [19199]
[ WARN] [1645331407.509654448]: '/hough_lines' subscribes topics only with child subscribers.

Folder analysis
The package has four folders: config, data, launch, and scripts.

config folder
config stores configuration-related files.
For example, there is a file called AR_track_camera.rviz: start AR tag tracking on the robot, start RViz on the PC, then load this config file (File -> Open Config), and RViz shows each tag's position and distance.
Without loading it, RViz opens with the default layout shown below.
Check the path of the current RViz configuration file: it is the default one.
After loading our own configuration file, RViz looks like the following. In other words, the .rviz config file determines how the RViz tool is laid out.
The config folder also contains line_hsv.cfg, the dynamic-reconfigure configuration file.
It declares which parameters are tunable and what their default values are.
#!/usr/bin/env python
PACKAGE = "robot_vision"

from dynamic_reconfigure.parameter_generator_catkin import *

gen = ParameterGenerator()

gen.add("h_lower", int_t, 0, "HSV color space h_low", 0, 0, 255)
gen.add("s_lower", int_t, 0, "HSV color space s_low", 0, 0, 255)
gen.add("v_lower", int_t, 0, "HSV color space v_low", 0, 0, 255)
gen.add("h_upper", int_t, 0, "HSV color space h_high", 255, 0, 255)
gen.add("s_upper", int_t, 0, "HSV color space s_high", 255, 0, 255)
gen.add("v_upper", int_t, 0, "HSV color space v_high", 255, 0, 255)

exit(gen.generate(PACKAGE, "robot_vision", "line_hsv"))

Other files:
data: stores the face classifiers.

CMakeLists.txt:
cmake_minimum_required(VERSION 2.8.3)
project(robot_vision)

# the vision code uses dynamic reconfigure
find_package(catkin REQUIRED
  dynamic_reconfigure  # pull in the dynamic_reconfigure package
)
generate_dynamic_reconfigure_options(  # register the parameter config file
  # cfg/DynReconf1.cfg
  config/line_hsv.cfg
)

Clean the previous build output (deleting the workspace's build and devel folders has the same effect):

jym@ubuntu:~$ cd ~/catkin_ws
jym@ubuntu:~/catkin_ws$ catkin_make clean
Base path: /home/jym/catkin_ws
Source space: /home/jym/catkin_ws/src
Build space: /home/jym/catkin_ws/build
Devel space: /home/jym/catkin_ws/devel
Install space: /home/jym/catkin_ws/install

After rebuilding, devel contains the generated line_hsvConfig.py. The dynamic_reconfigure dependency and the registered config file in CMakeLists.txt exist so that catkin_make generates the required glue files, including both the Python and the C++ interfaces:

[ 2%] Generating dynamic reconfigure files from config/line_hsv.cfg:
      /home/jym/catkin_ws/devel/include/robot_vision/line_hsvConfig.h
      /home/jym/catkin_ws/devel/lib/python2.7/dist-packages/robot_vision/cfg/line_hsvConfig.py

package.xml
<?xml version="1.0"?>
<package format="2"> <!-- version of the package tag -->
  <buildtool_depend>catkin</buildtool_depend>
  <build_depend>dynamic_reconfigure</build_depend> <!-- add the build dependency for dynamic reconfigure -->
  <exec_depend>rospy</exec_depend>
  <exec_depend>sensor_msgs</exec_depend>
  <exec_depend>cv_bridge</exec_depend>
  <exec_depend>opencv_apps</exec_depend> <!-- format 2 uses the exec_depend tag -->
  <exec_depend>uvc_camera</exec_depend>
</package>

jym@ubuntu:~/catkin_ws/src$ rosdep check --from-path robot_vision/
All system dependencies have been satisfied

If anything is missing: rosdep install --from-path robot_vision/

launch folder
The launch files here wrap nodes from the opencv_apps package.
You can edit the default values in the launch files so they match the topic names the robot actually publishes.
Part of one of the launch files:
<!-- hough_lines.cpp -->
<node name="$(arg node_name)" pkg="opencv_apps" type="hough_lines" >

@ubuntu:~$ roscd opencv_apps/
@ubuntu:/opt/ros/melodic/share/opencv_apps$ ls
cmake launch msg nodelet_plugins.xml package.xml scripts srv test
@ubuntu:/opt/ros/melodic/share/opencv_apps$ ls scripts/
face_recognition_trainer.py

As the listing shows, the installed opencv_apps package ships only the face-training script as source, because apt installs binary executables. To modify the OpenCV source, see the following sites:
http://wiki.ros.org/opencv_apps
https://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/sobel_derivatives/sobel_derivatives.html#sobel-derivatives
https://github.com/opencv/opencv/tree/2.4
Switching between the binary package and source: sudo apt remove ros-melodic-opencv-apps removes the apt-installed package; verify with roscd opencv_apps/ that it is gone. Then cd into catkin_ws/src and run git clone https://github.com/ros-perception/opencv_apps.git. Clean the previous build output (delete the workspace's build and devel folders), then build with a limited number of jobs: catkin_make -j1 (-jN means N threads). A single-threaded build uses fewer resources but takes longer. Verify with roscd opencv_apps/ and ls.

ar_track.launch: AR tag tracking
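rostopic echo /ar_pose_marker (used below) prints each marker's orientation as a quaternion; to read it as a heading angle, convert the quaternion to yaw. A minimal helper (hypothetical, pure Python; in ROS, tf.transformations.euler_from_quaternion does the same job):

```python
import math

def quat_to_yaw(x, y, z, w):
    """Yaw (rotation about Z) of a unit quaternion, ZYX convention."""
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))

# a 90-degree rotation about Z: q = (0, 0, sin(45 deg), cos(45 deg))
yaw = quat_to_yaw(0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4))
```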
robot: roslaunch robot_vision robot_camera.launch
robot: roslaunch robot_vision ar_track.launch
PC: rviz + load the config file
PC: rostopic echo /ar_pose_marker
prints each tag's ID, its distance from the camera origin, and its heading/orientation as a quaternion.

On running ar_track.launch on the PC instead, the image stutters:
stop ar_track.launch on the robot
PC: roslaunch robot_vision ar_track.launch
PC: rviz + load the config file
The view now visibly stutters. AR tag tracking subscribes to the image_raw topic; when ar_track runs on the PC, the raw image must first be streamed to the PC. image_raw uses a lot of bandwidth, so data is easily dropped and the view stutters.

<launch>
  <!-- support multi robot -->
  <!-- <arg name="robot_name" default=""/>
  <include file="$(find robot_vision)/launch/robot_camera.launch" >
    <arg name="robot_name" value="$(arg robot_name)"/>
  </include> -->
  <arg name="camera_frame" default="base_camera_link"/>
  <!-- start a static transform publisher from the launch file -->
  <!-- args: x y z, followed by the rotation as Euler angles -->
  <node pkg="tf" type="static_transform_publisher" name="world_to_cam"
        args="0 0 0.5 1.57 3.14 1.57 world $(arg camera_frame) 10" />
  <arg name="marker_size" default="5" />
  <arg name="max_new_marker_error" default="0.08" />
  <arg name="max_track_error" default="0.2" />
  <arg name="cam_image_topic" default="/image_raw" />
  <arg name="cam_info_topic" default="/camera_info" />
  <arg name="output_frame" default="/$(arg camera_frame)" />
  <!-- ar_track_alvar is also a third-party package -->
  <node name="ar_track_alvar" pkg="ar_track_alvar" type="individualMarkersNoKinect" respawn="false" output="screen">
    <param name="marker_size" type="double" value="$(arg marker_size)" />
    <param name="max_new_marker_error" type="double" value="$(arg max_new_marker_error)" />
    <param name="max_track_error" type="double" value="$(arg max_track_error)" />
    <param name="output_frame" type="string" value="$(arg output_frame)" />
    <!-- the remap tag renames a topic -->
    <!-- needed when the topic name a package expects differs from the one actually published -->
    <!-- it maps the name the node expects to the name we really publish -->
    <remap from="camera_image" to="$(arg cam_image_topic)" />
    <remap from="camera_info" to="$(arg cam_info_topic)" />
  </node>
  <!-- rviz view -->
  <!--node pkg="rviz" type="rviz" name="rviz" args="-d $(find robot_vision)/config/ar_track_camera.rviz"/-->
</launch>

fake_camera.launch: used for image overlay.
robot: roslaunch robot_vision robot_camera.launch
robot: roslaunch robot_vision fake_camera.launch
PC: roslaunch robot_vision adding_images.launch
PC: two rqt_image_view windows, subscribed to /image_static and /adding_images/image/compressed

<launch>
  <node pkg="robot_vision" name="fake_camera" type="fake_camera.py" output="screen">
    <!-- the node reads the file given by this path parameter: an image in the data folder -->
    <param name="image_path" value="$(find robot_vision)/data/jym.jpeg" />
  </node>
</launch>

line_follow.launch: used for vision-based line following.
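The line detector in this section isolates the track colour with cv2.inRange, using the HSV lower and upper bounds passed in by the launch file. What that call computes is a simple per-pixel range test; a numpy sketch (illustrative, with made-up pixel values):

```python
import numpy as np

def in_range(hsv, lower, upper):
    """Per-pixel range test, equivalent in effect to cv2.inRange:
    255 where every channel lies inside [lower, upper], else 0."""
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    mask = np.all((hsv >= lower) & (hsv <= upper), axis=-1)
    return (mask * 255).astype(np.uint8)

# a 1x3 "image" in HSV: a blue-ish, a red-ish, and a dark pixel
hsv = np.array([[[110, 200, 180], [0, 200, 180], [110, 30, 10]]], dtype=np.uint8)
mask = in_range(hsv, (100, 125, 125), (130, 255, 255))  # blue band
```

Only the first pixel falls inside all three bounds, so only it ends up white in the mask; that white region is what the detector scans for the line's centre.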
robot: roslaunch robot_vision robot_camera.launch
robot: roslaunch robot_vision line_follow.launch
PC: rqt_image_view
  subscribe /image_raw/compressed: the normal image
  subscribe /result_image/compressed: the centre point the robot has located on the line
  subscribe /mask_image/compressed: black-and-white image, white marks the blue line
robot: roslaunch base_control base_control.launch to follow the line

Changing the colour of the line to follow:
robot: roslaunch robot_vision robot_camera.launch
robot: roslaunch robot_vision line_follow.launch test_mode:=true
pc: rqt_image_view
  subscribe /mask_image/compressed: black-and-white image, white marks the blue line
  subscribe /result_image/compressed: a red crosshair over the image centre
The test_mode parameter makes the node continuously print the HSV value under the red crosshair.
Move the robot until the crosshair sits on the red line; the printed values become the red line's HSV values.
PC: rosrun rqt_reconfigure rqt_reconfigure (the dynamic reconfigure tool) to adjust the HSV values. The tool exposes upper and lower HSV bounds; clamp the upper bounds tightly to the detected values and relax the lower bounds somewhat.
Parameters tuned this way only last for the current run. To make them persistent:
robot: roscd robot_vision/launch
then vim line_follow.launch and edit the HSV default values.

<launch>
  <arg name="h_lower" default="0"/>
  <arg name="s_lower" default="125"/>
  <arg name="v_lower" default="125"/>
  <arg name="h_upper" default="30"/>
  <arg name="s_upper" default="255"/>
  <arg name="v_upper" default="200"/>
  <arg name="test_mode" default="False"/> <!-- node works in test mode or normal mode -->
  <node pkg="robot_vision" name="linefollow" type="line_detector.py" output="screen">
    <param name="h_lower" value="$(arg h_lower)"/>
    <param name="s_lower" value="$(arg s_lower)"/>
    <param name="v_lower" value="$(arg v_lower)"/>
    <param name="h_upper" value="$(arg h_upper)"/>
    <param name="s_upper" value="$(arg s_upper)"/>
    <param name="v_upper" value="$(arg v_upper)"/>
    <param name="test_mode" value="$(arg test_mode)"/>
  </node>
  <!-- republish the raw image topics as compressed topics to save bandwidth -->
  <node name="republish_mask" pkg="image_transport" type="republish" args="raw in:=mask_image compressed out:=mask_image" />
  <node name="republish_result" pkg="image_transport" type="republish" args="raw in:=result_image compressed out:=result_image" />
</launch>

robot_camera.launch
<launch>
  <!-- support multi robot -->
  <!-- incoming arguments -->
  <arg name="robot_name" default=""/>
  <arg name="base_type" default="$(env BASE_TYPE)" />
  <arg name="camera_type" default="$(env CAMERA_TYPE)" />
  <arg name="camera_tf" default="false" />
  <arg name="base_frame" default="base_footprint" />
  <arg name="camera_frame" default="base_camera_link"/>
  <arg name="device" default="video0"/>
  <arg name="base_id" value="$(arg robot_name)/$(arg base_frame)"/> <!-- base_link name -->
  <arg name="camera_id" value="$(arg robot_name)/$(arg camera_frame)"/> <!-- camera link name -->
  <!-- NanoRobot camera image needs flipping -->
  <!-- two groups; which one runs depends on whether robot_name is empty -->
  <group if="$(eval robot_name == '')">
    <group if="$(eval base_type == 'NanoRobot')">
      <node name="uvc_camera" pkg="uvc_camera" type="uvc_camera_node" output="screen" >
        <param name="device" value="/dev/$(arg device)" />
        <param name="fps" value="30" />
        <param name="image_width" value="640" />
        <param name="image_height" value="480" />
        <param name="pixel_format" value="jpeg" />
        <param name="frame_id" value="$(arg camera_id)" />
        <param name="io_method" value="mmap"/>
        <param name="horizontal_flip" value="true" />
        <param name="vertical_flip" value="true" />
        <param name="camera_info_url" type="string" value="file://$(find robot_vision)/config/$(arg camera_type).yaml"/>
      </node>
    </group>
    <group unless="$(eval base_type == 'NanoRobot')">
      <node name="uvc_camera" pkg="uvc_camera" type="uvc_camera_node" output="screen" >
        <param name="device" value="/dev/$(arg device)" />
        <param name="fps" value="30" />
        <param name="image_width" value="640" />
        <param name="image_height" value="480" />
        <param name="pixel_format" value="jpeg" />
        <param name="frame_id" value="$(arg camera_id)" />
        <param name="io_method" value="mmap"/>
        <param name="camera_info_url" type="string" value="file://$(find robot_vision)/config/$(arg camera_type).yaml"/>
      </node>
    </group>
    <!-- publish camera tf info -->
    <group if="$(arg camera_tf)">
      <group if="$(eval base_type == 'NanoRobot')">
        <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
              args="0.02225 0.0 0.10 0.0 0.0 0.0 $(arg base_id) $(arg camera_id) 20"></node>
      </group>
      <group if="$(eval base_type == '4WD')">
        <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
              args="0.1 0.02 0.25 0.0 0.0 0.0 $(arg base_id) $(arg camera_id) 20"></node>
      </group>
      <group if="$(eval base_type == '4WD_OMNI')">
        <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
              args="0.1 0.02 0.25 0.0 0.0 0.0 $(arg base_id) $(arg camera_id) 20"></node>
      </group>
      <group if="$(eval base_type == 'NanoCar')">
        <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
              args="0.1024 0.0 0.1 0.0 0.0 0.0 $(arg base_id) $(arg camera_id) 20"></node>
      </group>
      <group if="$(eval base_type == 'NanoRobot_Pro')">
        <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
              args="0.0015 0.023 0.140 0.0 0.0 0.0 $(arg base_id) $(arg camera_id) 20"></node>
      </group>
      <group if="$(eval base_type == 'NanoCar_Pro')">
        <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
              args="0.140 0.023 0.093 0 0.0 0.0 $(arg base_id) $(arg camera_id) 20"></node>
      </group>
    </group>
  </group>
  <!-- robot_name non-empty: run the second group under the namespace -->
  <group unless="$(eval robot_name == '')">
    <group ns="$(arg robot_name)">
      <!-- NanoRobot camera image needs flipping -->
      <!-- handle each base type differently -->
      <group if="$(eval base_type == 'NanoRobot')">
        <node name="uvc_camera" pkg="uvc_camera" type="uvc_camera_node" output="screen" >
          <param name="device" value="/dev/$(arg device)" />
          <param name="fps" value="30" />
          <param name="image_width" value="640" />
          <param name="image_height" value="480" />
          <param name="pixel_format" value="jpeg" />
          <param name="frame_id" value="$(arg camera_id)" />
          <param name="io_method" value="mmap"/>
          <param name="horizontal_flip" value="true" /> <!-- the NanoRobot camera is mounted inverted, so horizontal_flip is set to true -->
          <param name="vertical_flip" value="true" />
        </node>
      </group>
      <group unless="$(eval base_type == 'NanoRobot')">
        <node name="uvc_camera" pkg="uvc_camera" type="uvc_camera_node" output="screen" >
          <param name="device" value="/dev/$(arg device)" />
          <param name="fps" value="30" />
          <param name="image_width" value="640" />
          <param name="image_height" value="480" />
          <param name="pixel_format" value="jpeg" />
          <param name="frame_id" value="$(arg camera_id)" />
          <param name="io_method" value="mmap"/>
        </node>
      </group>
      <!-- publish camera tf info: the static transform depends on the base type -->
      <group if="$(arg camera_tf)">
        <group if="$(eval base_type == 'NanoRobot')">
          <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
                args="0.02225 0.0 0.10 0.0 0.0 0.0 $(arg base_id) $(arg camera_id) 20"></node>
        </group>
        <group if="$(eval base_type == '4WD')">
          <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
                args="0.1 0.02 0.25 0.0 0.0 0.0 $(arg base_id) $(arg camera_id) 20"></node>
        </group>
        <group if="$(eval base_type == '4WD_OMNI')">
          <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
                args="0.1 0.02 0.25 0.0 0.0 0.0 $(arg base_id) $(arg camera_id) 20"></node>
        </group>
        <group if="$(eval base_type == 'NanoCar')">
          <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
                args="0.1024 0.0 0.1 0.0 0.0 0.0 $(arg base_id) $(arg camera_id) 20"></node>
        </group>
        <group if="$(eval base_type == 'NanoRobot_Pro')">
          <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
                args="0.0015 0.023 0.140 0.0 0.0 0.0 $(arg base_id) $(arg camera_id) 20"></node>
        </group>
        <group if="$(eval base_type == 'NanoCar_Pro')">
          <node pkg="tf" type="static_transform_publisher" name="base_footprint_to_camera"
                args="0.140 0.023 0.093 0 0.0 0.0 $(arg base_id) $(arg camera_id) 20"></node>
        </group>
      </group>
    </group>
  </group>
</launch>

scripts folder
cv_bridge_test: converts between OpenCV data and ROS messages.

robot: roslaunch robot_vision robot_camera.launch
pc: rosrun robot_vision cv_bridge_test.py
pc: rqt_image_view
  subscribe /image_raw/compressed
  subscribe /cv_bridge_image: a dot appears on the image
The cv_bridge_test node subscribes to the image_raw topic, converts it to OpenCV data, draws a dot on the image, and finally converts it back to a ROS topic and publishes it.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import rospy
import cv2
from cv_bridge import CvBridge, CvBridgeError
from sensor_msgs.msg import Image

class image_converter:
    def __init__(self):
        # create the cv_bridge and declare the image publisher and subscriber
        self.image_pub = rospy.Publisher("cv_bridge_image", Image, queue_size=1)
        self.bridge = CvBridge()
        # the node first subscribes to the image topic
        self.image_sub = rospy.Subscriber("/image_raw", Image, self.callback)

    def callback(self, data):
        # convert the ROS image message to OpenCV format with cv_bridge
        try:
            # imgmsg_to_cv2 turns the subscribed image topic into an OpenCV image
            cv_image = self.bridge.imgmsg_to_cv2(data, "bgr8")
        except CvBridgeError as e:
            print(e)
        # draw a circle on the OpenCV image as a marker
        (rows, cols, channels) = cv_image.shape
        if cols > 60 and rows > 60:
            cv2.circle(cv_image, (60, 60), 30, (0, 0, 255), -1)
        # show the OpenCV image (needs a display)
        # cv2.imshow("Image window", cv_image)
        # cv2.waitKey(3)
        # convert the OpenCV data back to a ROS Image message and publish it
        try:
            # cv2_to_imgmsg converts it into a ROS topic message for publishing
            img_msg = self.bridge.cv2_to_imgmsg(cv_image, "bgr8")
            img_msg.header.stamp = rospy.Time.now()
            self.image_pub.publish(img_msg)
        except CvBridgeError as e:
            print(e)

if __name__ == '__main__':
    try:
        # initialise the ROS node
        rospy.init_node("cv_bridge_test")
        rospy.loginfo("Starting cv_bridge_test node")
        image_converter()
        rospy.spin()
    except KeyboardInterrupt:
        print("Shutting down cv_bridge_test node.")
        cv2.destroyAllWindows()

fake_camera.py: used for image overlay.
#!/usr/bin/env python
import rospy
import cv2
from std_msgs.msg import String
from cv_bridge import CvBridge, CvBridgeError
from sensor_msgs.msg import Image

class fake_camera:
    def __init__(self):
        self.image_path = rospy.get_param('~image_path', 'jym.jpeg')
        self.image_pub = rospy.Publisher("/image_static", Image, queue_size=3)
        self.bridge = CvBridge()
        self.pub_image()

    def pub_image(self):
        self.rate = rospy.Rate(30)  # publish at 30 Hz
        print(self.image_path)
        # read the image to be overlaid with OpenCV
        self.cv_image = cv2.imread(self.image_path, 1)
        rospy.loginfo("Start Publish Fake Camera Image")
        while not rospy.is_shutdown():
            # convert to a ROS message and publish periodically
            img_msg = self.bridge.cv2_to_imgmsg(self.cv_image, "bgr8")
            img_msg.header.stamp = rospy.Time.now()
            self.image_pub.publish(img_msg)
            self.rate.sleep()  # sleep to hold the rate set above

if __name__ == '__main__':
    try:
        rospy.init_node("fake_camera", anonymous=False)
        fake_camera()
    except rospy.ROSInterruptException:
        pass

line_detector.py: used for line following.
#!/usr/bin/env python
import rospy
import cv2
from cv_bridge import CvBridge, CvBridgeError
from sensor_msgs.msg import Image
import numpy as np
from dynamic_reconfigure.server import Server
from robot_vision.cfg import line_hsvConfig
from geometry_msgs.msg import Twist

class line_follow:
    def __init__(self):
        # define topic publishers, the subscriber and the CvBridge
        self.bridge = CvBridge()
        self.image_sub = rospy.Subscriber("/image_raw", Image, self.callback)
        self.mask_pub = rospy.Publisher("/mask_image", Image, queue_size=1)
        self.result_pub = rospy.Publisher("/result_image", Image, queue_size=1)
        self.pub_cmd = rospy.Publisher('cmd_vel', Twist, queue_size=5)
        self.srv = Server(line_hsvConfig, self.dynamic_reconfigure_callback)
        # get params from the launch file
        self.test_mode = bool(rospy.get_param('~test_mode', False))
        self.h_lower = int(rospy.get_param('~h_lower', 110))
        self.s_lower = int(rospy.get_param('~s_lower', 50))
        self.v_lower = int(rospy.get_param('~v_lower', 50))
        self.h_upper = int(rospy.get_param('~h_upper', 130))
        self.s_upper = int(rospy.get_param('~s_upper', 255))
        self.v_upper = int(rospy.get_param('~v_upper', 255))
        # line centre point X-axis coordinate
        self.center_point = 0

    # called whenever a value is changed through the rqt_reconfigure tool
    def dynamic_reconfigure_callback(self, config, level):
        # update config params: copy the values set in rqt_reconfigure into the node's variables
        self.h_lower = config.h_lower
        self.s_lower = config.s_lower
        self.v_lower = config.v_lower
        self.h_upper = config.h_upper
        self.s_upper = config.s_upper
        self.v_upper = config.v_upper
        return config

    # extract the line's colour, position and centre point
    def callback(self, data):
        # convert the ROS topic to CV image format
        try:
            cv_image = self.bridge.imgmsg_to_cv2(data, "bgr8")
        except CvBridgeError as e:
            print(e)
        # convert image colour from RGB to HSV
        hsv_image = cv2.cvtColor(cv_image, cv2.COLOR_RGB2HSV)
        # set colour mask min and max values
        line_lower = np.array([self.h_lower, self.s_lower, self.v_lower])
        line_upper = np.array([self.h_upper, self.s_upper, self.v_upper])
        # get mask from colour
        mask = cv2.inRange(hsv_image, line_lower, line_upper)
        # close operation to fill small holes
        kernel = np.ones((9, 9), np.uint8)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        # in test mode, output the HSV value at the centre point
        res = cv_image
        if self.test_mode:
            cv2.circle(res, (hsv_image.shape[1]/2, hsv_image.shape[0]/2), 5, (0, 0, 255), 1)
            cv2.line(res, (hsv_image.shape[1]/2-10, hsv_image.shape[0]/2), (hsv_image.shape[1]/2+10, hsv_image.shape[0]/2), (0, 0, 255), 1)
            cv2.line(res, (hsv_image.shape[1]/2, hsv_image.shape[0]/2-10), (hsv_image.shape[1]/2, hsv_image.shape[0]/2+10), (0, 0, 255), 1)
            rospy.loginfo("Point HSV Value is %s" % hsv_image[hsv_image.shape[0]/2, hsv_image.shape[1]/2])
        else:
            # in normal mode, locate the line from the mask
            # res = cv2.bitwise_and(cv_image, cv_image, mask=mask)
            for i in range(-60, 100, 20):
                # extract the line's centre point
                point = np.nonzero(mask[mask.shape[0]/2 + i])
                if len(point[0]) > 10:
                    self.center_point = int(np.mean(point))
                    cv2.circle(res, (self.center_point, hsv_image.shape[0]/2+i), 5, (0, 0, 255), 5)
                    break
            if self.center_point:
                self.twist_calculate(hsv_image.shape[1]/2, self.center_point)
                self.center_point = 0
        # show CV images in debug mode (needs a display device)
        # cv2.imshow("Image window", res)
        # cv2.imshow("Mask window", mask)
        # cv2.waitKey(3)
        # convert the CV images to ROS topics and publish
        try:
            img_msg = self.bridge.cv2_to_imgmsg(res, encoding="bgr8")
            img_msg.header.stamp = rospy.Time.now()
            self.result_pub.publish(img_msg)
            img_msg = self.bridge.cv2_to_imgmsg(mask, encoding="passthrough")
            img_msg.header.stamp = rospy.Time.now()
            self.mask_pub.publish(img_msg)
        except CvBridgeError as e:
            print(e)

    # compute the velocity command to publish from the line's centre position
    def twist_calculate(self, width, center):
        center = float(center)
        self.twist = Twist()
        self.twist.linear.x = 0
        self.twist.linear.y = 0
        self.twist.linear.z = 0
        self.twist.angular.x = 0
        self.twist.angular.y = 0
        self.twist.angular.z = 0
        if center/width > 0.95 and center/width < 1.05:
            self.twist.linear.x = 0.2
        else:
            self.twist.angular.z = ((width - center) / width) / 2.0
            if abs(self.twist.angular.z) < 0.2:
                self.twist.linear.x = 0.2 - self.twist.angular.z/2.0
            else:
                self.twist.linear.x = 0.1
        self.pub_cmd.publish(self.twist)

if __name__ == '__main__':
    try:
        # init the ROS node
        rospy.init_node("line_follow")
        rospy.loginfo("Starting Line Follow node")
        line_follow()
        rospy.spin()
    except KeyboardInterrupt:
        print("Shutting down line_follow node.")
        cv2.destroyAllWindows()
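The steering rule in twist_calculate above boils down to three branches; restated as a standalone function with the same thresholds (a hypothetical refactor for illustration), it is easy to check against a few centre positions:

```python
def steer(width, center):
    """Line-follow steering rule from line_detector.py:
    returns (linear_x, angular_z) for a line centred at pixel `center`,
    where `width` is half the image width (as passed by the node)."""
    center = float(center)
    linear_x, angular_z = 0.0, 0.0
    if 0.95 < center / width < 1.05:
        linear_x = 0.2                        # line roughly centred: go straight
    else:
        angular_z = ((width - center) / width) / 2.0
        if abs(angular_z) < 0.2:
            linear_x = 0.2 - angular_z / 2.0  # gentle curve: small speed trim
        else:
            linear_x = 0.1                    # sharp curve: slow right down
    return linear_x, angular_z
```

For a 640-pixel image (width = 320), a centred line gives full forward speed with no turn, while an offset line trades forward speed for angular velocity.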