Channel: ROS Answers: Open Source Q&A Forum - RSS feed

About using imu_tools filters

Hi. I have a simple question about the usage of the filters included in `imu_tools` (`imu_complementary_filter` & `imu_filter_madgwick`). So far, I have succeeded in sending my IMU readings to ROS from an Arduino (at a rate of around 100 Hz, publishing accelerometer, gyro and magnetometer data to ROS). Then, I created a node that takes that data and builds two messages ([`sensor_msgs::Imu`](http://docs.ros.org/kinetic/api/sensor_msgs/html/msg/Imu.html) and [`sensor_msgs::MagneticField`](http://docs.ros.org/kinetic/api/sensor_msgs/html/msg/MagneticField.html) respectively). I put the accelerometer values in the `linear_acceleration` field, the gyro in the `angular_velocity` field and the magnetometer in the `magnetic_field`. Additionally, I fill the covariance matrices with 0, as I have no information about the covariances. Finally, as my IMU does not produce an orientation estimate, I leave the orientation field untouched and set `orientation_covariance[0]=-1`, as suggested in the [message definition](http://docs.ros.org/kinetic/api/sensor_msgs/html/msg/Imu.html). However, when I plot the results, the orientation values seem to jump a lot between 0 and 1, even with the IMU resting on the desk. I also tested the rviz plugin, but the box moves crazily, and I don't know what else to do. Any idea what I am doing wrong? I assumed the filters would take the available data, fuse it, and estimate the orientation quaternion. Am I right? I tried both filters, and couldn't get the IMU to work/display properly. Am I supposed to provide an estimate of the orientation to the filters? I'm running Ubuntu 16.04 with ROS Kinetic, and the IMU I'm using is an I2C LSM9DS0 from Adafruit.
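For reference, a minimal sketch of how such a publisher node could fill the two messages (the topic names, frame_id, and publishing setup are my assumptions, not taken from the question; the `-1` covariance convention is from the linked message definition):

    #!/usr/bin/env python
    import rospy
    from sensor_msgs.msg import Imu, MagneticField

    rospy.init_node('imu_msg_builder')
    # topic names assumed; check which topics your chosen filter subscribes to
    imu_pub = rospy.Publisher('imu/data_raw', Imu, queue_size=10)
    mag_pub = rospy.Publisher('imu/mag', MagneticField, queue_size=10)

    def publish_sample(ax, ay, az, gx, gy, gz, mx, my, mz):
        now = rospy.Time.now()
        imu = Imu()
        imu.header.stamp = now
        imu.header.frame_id = 'imu_link'  # assumed frame name
        imu.linear_acceleration.x, imu.linear_acceleration.y, imu.linear_acceleration.z = ax, ay, az
        imu.angular_velocity.x, imu.angular_velocity.y, imu.angular_velocity.z = gx, gy, gz
        # no orientation estimate from the sensor itself:
        imu.orientation_covariance[0] = -1
        mag = MagneticField()
        mag.header.stamp = now
        mag.header.frame_id = 'imu_link'
        mag.magnetic_field.x, mag.magnetic_field.y, mag.magnetic_field.z = mx, my, mz
        imu_pub.publish(imu)
        mag_pub.publish(mag)

As far as I know, the filters are then meant to fuse these raw messages into the orientation estimate themselves, so no orientation input should be needed.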

tf tree details

After running gmapping on intel.bag, while the map is being generated, running `rosrun tf view_frames` gives me frames.pdf. In it, the transform chain is map->odom->base_link, plus odom->laser. The broadcasting node for map->odom is /slam_gmapping, the broadcasting node for odom->laser is the /play_1479118047019813325 node, and the broadcasting node for odom->base_link is the /static_transform_publisher_1479118028670611586 node. Please explain the tree.

quaternion_from_euler, pan/tilt, which order?

Hey everybody, before I turn my brain into a slimy mess it's probably easier to admit I'm failing and ask... I have a camera pan/tilt system whose pose I'd like to broadcast through `tf`. The order of the axes is: first pan (left/right/yaw), then, in the pan frame, tilt (up/down/pitch). How do I translate this into a quaternion? Is using `quaternion_from_euler` the best way to do this? And if it is, in what order do I have to put the pan and tilt angles to get the corresponding correct quaternion returned? There are too many permutations to try and I'm really struggling to figure it out through pure thinking. Thanks for your help! //edit: oh, just to mention this: the axes of the tf frame should correspond to the image processing requirements. That is: the frame x- and y-axes should match the image x- and y-axes and the z-axis should point away from the camera, into the image. Or in other words, if I looked through the camera, the x-axis should go to the right, the y-axis down and the z-axis should point away from me. //edit2: z-axis direction wrong (right-hand convention), sorry... corrected! //edit3: dammit, it was correct in the first place... corrected again... I'm so confused!
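For what it's worth, a minimal sketch of composing an intrinsic pan-then-tilt rotation with `tf.transformations`; the `'rzyx'` axes string and the extra body-to-optical rotation are my assumptions, not a confirmed answer:

    import math
    from tf.transformations import quaternion_from_euler, quaternion_multiply

    pan, tilt = 0.4, 0.2  # example angles in radians

    # 'rzyx' = rotating (intrinsic) axes: first rotate about z (pan/yaw),
    # then about the already-panned y axis (tilt/pitch)
    q_body = quaternion_from_euler(pan, tilt, 0.0, axes='rzyx')

    # commonly used fixed rotation from a ROS body frame (x forward, z up)
    # into a camera optical frame (z forward, x right, y down)
    q_body_to_optical = quaternion_from_euler(-math.pi / 2, 0.0, -math.pi / 2)
    # apply the optical offset in the already panned/tilted frame
    q = quaternion_multiply(q_body, q_body_to_optical)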

tf timing issue with point cloud filtering

Hi everyone, I'm trying to filter a point cloud by means of the available `passthrough` filter nodelet. It works fine so far, as long as I filter the **untransformed** point cloud in the frame it is originally published in. I'd like, though, to filter the point cloud in the world frame. For this purpose there's the `~input_frame` parameter which, I understand, tells the `passthrough` filter to first transform all point coordinates into the given frame. The problem is that the transformation through `tf` fails with a complaint that the time stamp would require extrapolation into the past. The delta between the corresponding stamps is only a fraction of a second, say, some hundred milliseconds. Still, it does not work. I'm not sure what's wrong here and would be happy if someone could point me in the right direction. Thanks! It might be related to the fact that I create the point cloud by means of stereo reconstruction using `elas_ros`. Due to the resolution of the camera images (1920x1080) it takes `elas_ros` some time to generate the point cloud (about 5 - 10 seconds). But I'm not sure.
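A workaround sketch, under my assumption that the root cause is the cloud arriving several seconds after its header stamp, so the transform at that stamp has to be waited for explicitly with a long enough buffer; the topic and world frame names are hypothetical:

    import rospy
    import tf2_ros
    from sensor_msgs.msg import PointCloud2

    rospy.init_node('cloud_transform_example')
    # keep a tf cache long enough to cover the 5-10 s reconstruction delay
    tf_buffer = tf2_ros.Buffer(rospy.Duration(30.0))
    tf_listener = tf2_ros.TransformListener(tf_buffer)

    def cloud_cb(cloud):
        # block until the transform at the cloud's own stamp is available
        trans = tf_buffer.lookup_transform('world', cloud.header.frame_id,
                                           cloud.header.stamp,
                                           timeout=rospy.Duration(15.0))
        # ... apply trans to the cloud, e.g. with tf2_sensor_msgs.do_transform_cloud

    rospy.Subscriber('points_in', PointCloud2, cloud_cb)
    rospy.spin()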

Do the /odom topic and odom->base_link tf provide duplicate information?

I'm making a differential-drive rover with wheel encoders and planar LIDAR, but at the moment no other sensors. I understand why it's useful to have [separate `map` and `odom` frames](http://answers.ros.org/question/237295/confused-about-coordinate-frames-can-someone-please-explain/)--if my wheel odometry were perfect, the transform between the two would be zero, but it's not, so mapping (gmapping at the moment) is needed to find the correcting transform. I've implemented a publisher of both `nav_msgs.msg.Odometry` as topic `/odom` and `tf.TransformBroadcaster` as a transform between `odom` and `base_link`, according to [this tutorial](http://library.isr.ist.utl.pt/docs/roswiki/navigation(2f)Tutorials(2f)RobotSetup(2f)Odom.html). However, it doesn't seem that the `odom -> base_link` transformation provides any information that isn't already present in my `/odom` topic. Are there other cases in which these would be different, and I'm just not very imaginative? Is the separate existence of the tf just for the (perhaps significant) convenience of having a homogeneous way to transform between the `odom` and `map` frames? In that case, maybe it would be possible not to publish the `/odom` topic, since the only extra thing it has is velocities, which I just approximate as the difference from the last message over the time increment. FWIW, my code is [here](https://github.com/tsbertalan/gunnar), and in particular **[this script](https://github.com/tsbertalan/gunnar/blob/master/scripts/hardwareDriver) contains the publisher of both the `/odom` topic and the broadcaster of the `odom -> base_link` transformation**. I might later solicit comments on the larger project, but at the moment it's a bit disorganized because of a relatively recent jettisoning of Arduino as a bridge to hardware.
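As a reference point, a minimal sketch of what the tutorial-style publisher sends in each case (the pose and velocity values are placeholders): the tf carries only the pose, while the `Odometry` message additionally carries the twist and the covariance fields.

    import rospy
    import tf
    from nav_msgs.msg import Odometry
    from geometry_msgs.msg import Quaternion

    rospy.init_node('odom_example')
    odom_pub = rospy.Publisher('odom', Odometry, queue_size=10)
    br = tf.TransformBroadcaster()

    x, y, th = 1.0, 0.0, 0.5   # placeholder pose
    vx, vth = 0.1, 0.05        # placeholder velocities
    q = tf.transformations.quaternion_from_euler(0, 0, th)
    now = rospy.Time.now()

    # the transform: pose only
    br.sendTransform((x, y, 0.0), q, now, 'base_link', 'odom')

    # the topic: pose plus velocities (covariances left at defaults here)
    odom = Odometry()
    odom.header.stamp = now
    odom.header.frame_id = 'odom'
    odom.child_frame_id = 'base_link'
    odom.pose.pose.position.x, odom.pose.pose.position.y = x, y
    odom.pose.pose.orientation = Quaternion(*q)
    odom.twist.twist.linear.x = vx
    odom.twist.twist.angular.z = vth
    odom_pub.publish(odom)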

tf: how to use the latest transform instead of waiting?

When calling "pcl_ros::transformPointCloud" I get the error: [ERROR] [1480244411.676958769, 1224.030000000]: Lookup would require extrapolation into the future. Requested time 1224.022000000 but the latest data is at time 1224.021000000, when looking up transform from frame [base_laser_link] to frame [odom] I know this could be solved by calling "waitForTransform", but in this specific case speed is more important than precision. So I would like to use the latest available transform, even if a little off, rather that wait for the more suitable one. I.e. in the example above it would use the tf at 1225.021. Any way to do this ?

Unable to echo rostopic on VMware Workstation 12

Hi! I am running Ubuntu 12.04 with ROS Hydro on my workstation through VMware Workstation 12. This is connected to a robot, also running ROS Hydro. I have been successful in running the talker/listener examples between my workstation and the robot, and in joystick teleop control from my workstation to the robot. However, I have been unable to view the USB camera. My run command is `rosrun image_view image_view image:=/camera/image_color compressed`; only a white box pops up. Running `lsusb` shows the device:

    Bus 002 Device 005: ID 1e10:2004 Point Grey Research, Inc. Sony 1.3MP 1/3" ICX445 IIDC video camera [Chameleon]
    Bus 005 Device 002: ID 2341:8036
    Bus 007 Device 002: ID 1546:01a6 U-Blox AG
    Bus 007 Device 003: ID 0403:6001 Future Technology Devices International, Ltd FT232 USB-Serial (UART) IC
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

I have also been unable to get any response on my workstation from `rostopic echo /tf`, even though `/tf` shows up in the long list printed by `rostopic list`. I have tried changing the IP addresses and hostname setup to rectify the issue, but without success. Here is my network setup in `.bashrc`:

    #export ROS_IP=192.168.1.51
    #export ROS_MASTER_URI=http://192.168.1.105:11311
    export ROS_IP=192.168.1.51
    export ROS_MASTER_URI=http://CPR-SMIT03:11311

and in `/etc/hosts`:

    127.0.0.1       localhost
    127.0.1.1       ubuntu
    192.168.1.105   CPR-SMIT03

    # The following lines are desirable for IPv6 capable hosts
    ::1     ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters

Are there any restrictions introduced by the use of VMware Workstation? Any advice on this issue, or suggestions for solving it, would be welcome. I have been unable to use the camera with Skype due to the lack of driver support, so I cannot test whether the camera itself is functioning.

Convert geometry_msgs/Pose to Point

Hey all! I have a `std::vector<geometry_msgs::Pose> waypoints` and now I would like to transform it to `geometry_msgs::Point`. I tried to convert it to Eigen and afterwards to Point. My first step was something like:

    std::vector<geometry_msgs::Pose> m;
    m = waypoints;
    Eigen::Affine3d e;
    tf::poseMsgToEigen(m, e);

and I get the error: no matching function for call to `poseMsgToEigen`. Do you have any suggestions? Greetings, mutu

How to solve the conflict among "/joint_states", urdf and tf's transforms

Hi, all! My question comes from [this question](http://answers.ros.org/question/249058/a-problem-about-the-conversion-from-euler-angle-to-quaternion/) published before. You don't need to read that question, because I have found that the cause may be the multiple position values in the [/joint_states](http://docs.ros.org/api/sensor_msgs/html/msg/JointState.html) message, which is published by [joint_state_publisher](http://wiki.ros.org/joint_state_publisher#Default_value) and received by [robot_state_publisher](http://wiki.ros.org/robot_state_publisher). In a word, we are constructing a 3D scanning hardware system in which a Dynamixel motor rotates a Hokuyo scanner to obtain a 3D map. We use a URDF and the robot_state_publisher package to construct the tf tree, [as this tutorial says](http://wiki.ros.org/urdf/Tutorials/Using%20urdf%20with%20robot_state_publisher). But judging from the display of the 3D model and the tf tree, the revolute joint by which the motor rotates the scanner doesn't work normally. As [this question](http://answers.ros.org/question/249058/a-problem-about-the-conversion-from-euler-angle-to-quaternion/) said, we found that the frame "HN07_N101" (the revolute joint on the Dynamixel motor), which rotates around its own blue "Z" axis, isn't displayed normally. From the tf tree displayed in rviz, it seems that the frame "HN07_N101" can't rotate smoothly. We checked the topic "/joint_states" as follows:

    hodor@hodor:~$ rostopic echo /joint_states | grep position
    position: [0.0]
    position: [0.22242721424341796]
    position: [0.0]
    position: [-0.09050486648525283]
    position: [0.0]
    position: [-0.40803888957758055]
    ...
    position: [0.0]
    position: [-1.5677283652191252]
    position: [0.0]

It seems that there is a conflict between the messages published by joint_state_publisher (zero values) and the messages published by [our project](http://answers.ros.org/question/249058/a-problem-about-the-conversion-from-euler-angle-to-quaternion/). This conflict leads to two different position values on the topic "/joint_states", which possibly causes the unsmooth rotation of the frame "HN07_N101". My launch file is:

Does anybody know how to solve this conflict? Best regards! Thank you!
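For context, a sketch of one way such duplication could be avoided, using joint_state_publisher's documented `source_list` parameter so it relays an external topic instead of publishing default (zero) values; the topic name here is hypothetical:

    <node name="joint_state_publisher" pkg="joint_state_publisher" type="joint_state_publisher">
      <rosparam param="source_list">[/scanner/joint_states]</rosparam>
    </node>
    <node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher"/>

Alternatively, if your own node already publishes every joint on /joint_states, not launching joint_state_publisher at all removes the competing zero-valued messages.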

tf2_ros buffer transform PointStamped?

I'd like to transform a `PointStamped` using a python `tf2_ros.Buffer`, but so far I'm only getting type exceptions. In the following example, a point with only a y component is supposed to be trivially transformed into the frame it is already in:

    #!/usr/bin/env python
    import rospy
    import sys
    import tf2_ros
    from geometry_msgs.msg import PointStamped

    if __name__ == '__main__':
        rospy.init_node('transform_point_tf2')
        tf_buffer = tf2_ros.Buffer()
        tf_listener = tf2_ros.TransformListener(tf_buffer)
        rospy.sleep(1.0)
        pt = PointStamped()
        pt.header.stamp = rospy.Time.now()
        pt.header.frame_id = "map"
        pt.point.x = 0.0
        pt.point.y = 1.0
        pt.point.z = 0.0
        try:
            pt2 = tf_buffer.transform(pt, "map")
        except:  # tf2_ros.buffer_interface.TypeException as e:
            e = sys.exc_info()[0]
            rospy.logerr(e)
            sys.exit(1)
        rospy.loginfo(pt2)

This results in:

    [ERROR] [/transform_point_tf2] [./transform_point_tf2.py]:[27] []

If PointStamped isn't the right type, then what type can I use? To do this manually I'm currently doing this:

    trans = self.tf_buffer.lookup_transform("map", target_frame, rospy.Time())
    quat = [trans.transform.rotation.x, trans.transform.rotation.y,
            trans.transform.rotation.z, trans.transform.rotation.w]
    mat = tf.transformations.quaternion_matrix(quat)
    pt_np = [pt.point.x, pt.point.y, pt.point.z, 1.0]
    pt_in_map_np = numpy.dot(mat, pt_np)
    pt_in_map.x = pt_in_map_np[0]
    pt_in_map.y = pt_in_map_np[1]
    pt_in_map.z = pt_in_map_np[2]
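A possibly relevant sketch: as I understand the tf2 Python buffer interface, it only knows how to transform a given message type once the corresponding conversion module has been imported, so `import tf2_geometry_msgs` is what registers `PointStamped`. This is my reading of the mechanism, not a verified fix for this exact traceback:

    import rospy
    import tf2_ros
    import tf2_geometry_msgs  # registers geometry_msgs types (incl. PointStamped) with the buffer
    from geometry_msgs.msg import PointStamped

    rospy.init_node('transform_point_tf2_example')
    tf_buffer = tf2_ros.Buffer()
    tf_listener = tf2_ros.TransformListener(tf_buffer)
    rospy.sleep(1.0)

    pt = PointStamped()
    pt.header.frame_id = "map"
    pt.point.y = 1.0
    pt2 = tf_buffer.transform(pt, "map")  # should now dispatch correctly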

No module named '_tf' error when doing the "Writing a tf broadcaster (Python)" tutorial

Hi all, below is the error I get when I try to launch the launch file using the command `roslaunch learning_tf start_demo.launch`:

      File "/home/cj/catkin_ws/src/learning_tf/nodes/turtle_tf_broadcaster.py", line 6, in <module>
        import tf
      File "/opt/ros/indigo/lib/python2.7/site-packages/tf/__init__.py", line 28, in <module>
        from _tf import *
    ImportError: No module named '_tf'

I tried updating using `pacman -Syu` and tried reinstalling the "tf" package, but the result is the same. I'm still a beginner and would appreciate it if someone could help me correct this error. PS. I followed the steps given in this [tutorial](http://wiki.ros.org/tf/Tutorials/Writing%20a%20tf%20broadcaster%20%28Python%29); the error occurs when I try to do the `roslaunch learning_tf start_demo.launch` step.

There is a link in the tf tree, but rviz reports an error

Hello everyone, I have built a tf tree. When I run `rosrun tf view_frames` I can see the connection map->odom->base_link->camera_link, but when I launch rviz and set the fixed frame to map, I can't add the octomap grid topic, i.e. I can't load an octomap into rviz, and rviz says there is no transform from map to base_link. But when I run `rosrun tf tf_echo map base_link`, the connection does exist. Any ideas for solving this problem?

Best way to get FK for many different joint-states

Hello! I am trying to calculate the end-effector pose for a huge number of different JointStates. The two ingredients for doing that are my URDF and the numpy array that contains the joint angles. I am currently using ROS Indigo + MoveIt on a regular (but fairly fast) notebook. In my python script I use a joint_state_publisher to publish the JointState and the move_group API to get_current_pose(). Unfortunately this is very slow... If I don't wait long enough (around 0.1 s) between publishing and getting the pose, it returns the previous pose instead of using the new JointState. I am not sure whether the problem is the JointState message being delayed, or the call to the move_group and subsequently tf taking so long. My goal would be something between 1 kHz and a MHz. My two questions are: 1. Is there an easy or elegant method to get the pose only once the new one is there? 2. Is there a more direct way to calculate the cartesian position given the joint states? The package arm_kinematics looks promising, but it's still from Fuerte and does not seem to be developed anymore. Here is a simplified version of my code:

    #!/usr/bin/env python
    import sys
    import rospy
    import numpy
    import copy
    import moveit_commander
    import moveit_msgs.msg
    import geometry_msgs.msg
    import sensor_msgs.msg
    import matplotlib.pyplot as plt
    from sensor_msgs.msg import JointState
    from std_msgs.msg import Header
    from std_msgs.msg import String

    moveit_commander.roscpp_initialize(sys.argv)
    robot = moveit_commander.RobotCommander()
    scene = moveit_commander.PlanningSceneInterface()
    group = moveit_commander.MoveGroupCommander("whole_arm")
    pub = rospy.Publisher('joint_states', JointState, queue_size=10)
    rospy.init_node('joint_state_publisher')
    pose_list = []

    ##
    # ... joint_state_list is created and filled with JointState objects
    ##

    for next_joint_state in joint_state_list:
        next_joint_state.header.stamp = rospy.Time.now()
        pub.publish(next_joint_state)
        rospy.sleep(0.1)
        current_pose = group.get_current_pose()
        pose_list.append(copy.deepcopy(current_pose))

Thank you very much!
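Regarding question 2, one direct route that skips the publish/subscribe round trip entirely is to build a KDL chain from the URDF and call its FK solver in-process. A sketch, under the assumption that PyKDL and `kdl_parser_py` are available on your distro; the base and tip link names are hypothetical:

    import PyKDL
    from kdl_parser_py import urdf as kdl_urdf

    # build a KDL tree from the URDF on the parameter server
    ok, tree = kdl_urdf.treeFromParam("/robot_description")
    assert ok
    chain = tree.getChain("base_link", "end_effector_link")  # hypothetical link names

    fk_solver = PyKDL.ChainFkSolverPos_recursive(chain)
    joints = PyKDL.JntArray(chain.getNrOfJoints())
    frame = PyKDL.Frame()

    def fk(joint_angles):
        for i, q in enumerate(joint_angles):
            joints[i] = q
        fk_solver.JntToCart(joints, frame)
        return frame  # frame.p is the position, frame.M the rotation

Since this is a plain function call, it should be limited by computation rather than message latency, which is what a kHz-range goal needs.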

Using tf::conversions for quaternion

Hi there, I am failing to transform a `tf::Quaternion` into an `Eigen::Quaterniond`. What I have is:

    tf::Quaternion sensor_rotation;
    Eigen::Quaterniond rotation;

This is working. I now want to use the method:

    tf::quaternionTFToEigen(sensor_rotation, rotation);

This gives me the following error:

    undefined reference to `tf::quaternionTFToEigen(tf::Quaternion const&, Eigen::Quaternion<double, 0>&)'
    collect2: error: ld returned 1 exit status

What I have done so far — included:

    #include "tf/transform_datatypes.h"
    #include "tf_conversions/tf_eigen.h"
    #include "Eigen/Core"
    #include "Eigen/Geometry"

In CMakeLists.txt, I added:

    list(APPEND CMAKE_MODULE_PATH ${CMAKE_CURRENT_SOURCE_DIR}/cmake)
    find_package(cmake_modules REQUIRED)
    find_package(Eigen REQUIRED)
    catkin_package(DEPENDS Eigen)
    include_directories(${Eigen_INCLUDE_DIRS})
    target_link_libraries(${Eigen_LIBRARIES})

In package.xml, I added:

    <build_depend>cmake_modules</build_depend>
    <build_depend>tf_conversion</build_depend>
    <run_depend>cmake_modules</run_depend>
    <run_depend>tf_conversion</run_depend>

Does anyone know what else I have to do to make this work? Thanks in advance!
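For reference, a sketch of the usual catkin pattern for pulling in and linking tf_conversions; the executable name is hypothetical, and the key point is listing `tf_conversions` as a catkin component and linking `${catkin_LIBRARIES}` into the target:

    find_package(catkin REQUIRED COMPONENTS roscpp tf tf_conversions)
    find_package(cmake_modules REQUIRED)
    find_package(Eigen REQUIRED)

    catkin_package(DEPENDS Eigen)

    include_directories(${catkin_INCLUDE_DIRS} ${Eigen_INCLUDE_DIRS})

    add_executable(my_node src/my_node.cpp)  # hypothetical target
    target_link_libraries(my_node ${catkin_LIBRARIES} ${Eigen_LIBRARIES})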

[URG+hector_slam] Error in rviz

Hello, I am Tanaka, from Japan. I have something I want to ask you. Using a URG + hector_slam, I want to build a map, but there are two errors in rviz. I am using "hector_hokuyo_launch" from https://github.com/DaikiMaekawa/hector_slam_example.

----error 1 in rviz----

In LaserScan Status::Error, Transform: "For frame [laser]: Fixed Frame [map] does not exist"

----error 2 in rviz----

In Global Status, Fixed Frame: "No tf data. Actual error: Fixed Frame [map] does not exist"

----usage environment----

OS: Ubuntu 14.04; version: Indigo; LRF: UTM-30X

----log is as follows----

    ~/catkin_ws/src/hector-slam-example/launch$ roslaunch hector_hokuyo.launch
    ... logging to /home/tanaka/.ros/log/d507a8d6-96f2-11e6-89b5-fcf8ae0bcc05/roslaunch-tanaka-Endeavor-NJ5900E-11009.log
    Checking log directory for disk usage. This may take awhile.
    Press Ctrl-C to interrupt
    Done checking log file disk usage. Usage is <1GB.

    started roslaunch server http://127.0.0.1:49774/

    SUMMARY
    ========

    PARAMETERS
     * /base_frame: base_frame
     * /hector_geotiff_node/draw_background_checkerboard: True
     * /hector_geotiff_node/draw_free_space_grid: True
     * /hector_geotiff_node/geotiff_save_period: 0.0
     * /hector_geotiff_node/map_file_base_name: hector_slam_map
     * /hector_geotiff_node/map_file_path: /opt/ros/indigo/s...
     * /hector_geotiff_node/plugins: hector_geotiff_pl...
     * /hector_mapping/advertise_map_service: True
     * /hector_mapping/base_frame: base_footprint
     * /hector_mapping/laser_z_max_value: 1.0
     * /hector_mapping/laser_z_min_value: -1.0
     * /hector_mapping/map_frame: map
     * /hector_mapping/map_multi_res_levels: 2
     * /hector_mapping/map_resolution: 0.05
     * /hector_mapping/map_size: 2048
     * /hector_mapping/map_start_x: 0.5
     * /hector_mapping/map_start_y: 0.5
     * /hector_mapping/map_update_angle_thresh: 0.06
     * /hector_mapping/map_update_distance_thresh: 0.4
     * /hector_mapping/odom_frame: nav
     * /hector_mapping/pub_map_odom_transform: True
     * /hector_mapping/scan_subscriber_queue_size: 5
     * /hector_mapping/scan_topic: scan
     * /hector_mapping/tf_map_scanmatch_transform_frame_name: scanmatcher_frame
     * /hector_mapping/update_factor_free: 0.4
     * /hector_mapping/update_factor_occupied: 0.9
     * /hector_mapping/use_tf_pose_start_estimate: False
     * /hector_mapping/use_tf_scan_transformation: True
     * /hector_trajectory_server/source_frame_name: /base_link
     * /hector_trajectory_server/target_frame_name: /map
     * /hector_trajectory_server/trajectory_publish_rate: 0.25
     * /hector_trajectory_server/trajectory_update_rate: 4.0
     * /map_frame: map
     * /odom_frame: odom
     * /pub_map_odom_transform: True
     * /rosdistro: indigo
     * /rosversion: 1.11.20

    NODES
      /
        base_2_nav_link (tf/static_transform_publisher)
        base_footprint_2_base_link (tf/static_transform_publisher)
        base_frame_2_laser_link (tf/static_transform_publisher)
        base_link_2_base_stabilized_link (tf/static_transform_publisher)
        base_stablized_2_base_frame (tf/static_transform_publisher)
        hector_geotiff_node (hector_geotiff/geotiff_node)
        hector_mapping (hector_mapping/hector_mapping)
        hector_trajectory_server (hector_trajectory_server/hector_trajectory_server)
        hokuyo_node (hokuyo_node/hokuyo_node)
        map_2_odom (tf/static_transform_publisher)
        odom_2_base_footprint (tf/static_transform_publisher)
        rviz (rviz/rviz)

    ROS_MASTER_URI=http://127.0.0.1:11311

    core service [/rosout] found
    process[hokuyo_node-1]: started with pid [11027]
    process[map_2_odom-2]: started with pid [11028]
    process[odom_2_base_footprint-3]: started with pid [11029]
    process[base_footprint_2_base_link-4]: started with pid [11041]
    process[base_link_2_base_stabilized_link-5]: started with pid [11051]
    process[base_stablized_2_base_frame-6]: started with pid [11058]
    process[base_frame_2_laser_link-7]: started with pid [11073]
    process[base_2_nav_link-8]: started with pid [11087]
    process[rviz-9]: started with pid [11100]
    process[hector_mapping-10]: started with pid [11113]
    process[hector_trajectory_server-11]: started with pid [11135]
    process[hector_geotiff_node-12]: started with pid [11142]
    [ INFO] [1477551836.679370193]: Waiting for tf transform data between frames /map and /base_link to become available
    HectorSM map lvl 0: cellLength: 0.05 res x:2048 res y: 2048
    HectorSM map lvl 1: cellLength: 0.1 res x:1024 res y: 1024
    [ INFO] [1477551836.738564037]: HectorSM p_base_frame_: base_footprint
    [ INFO] [1477551836.738638293]: HectorSM p_map_frame_: map
    [ INFO] [1477551836.738663259]: HectorSM p_odom_frame_: nav
    [ INFO] [1477551836.738688734]: HectorSM p_scan_topic_: scan
    [ INFO] [1477551836.738715540]: HectorSM p_use_tf_scan_transformation_: true
    [ INFO] [1477551836.738743303]: HectorSM p_pub_map_odom_transform_: true
    [ INFO] [1477551836.738780060]: HectorSM p_scan_subscriber_queue_size_: 5
    [ INFO] [1477551836.738808260]: HectorSM p_map_pub_period_: 2.000000
    [ INFO] [1477551836.738833718]: HectorSM p_update_factor_free_: 0.400000
    [ INFO] [1477551836.738858272]: HectorSM p_update_factor_occupied_: 0.900000
    [ INFO] [1477551836.738882714]: HectorSM p_map_update_distance_threshold_: 0.400000
    [ INFO] [1477551836.738907848]: HectorSM p_map_update_angle_threshold_: 0.060000
    [ INFO] [1477551836.738932192]: HectorSM p_laser_z_min_value_: -1.000000
    [ INFO] [1477551836.738956542]: HectorSM p_laser_z_max_value_: 1.000000
    [ INFO] [1477551836.756671736]: Successfully initialized hector_geotiff MapWriter plugin TrajectoryMapWriter.
    [ INFO] [1477551836.756946445]: Geotiff node started

Please advise me; I would appreciate any advice you can give.

![image description](https://16970802734400014542.googlegroups.com/attach/7c6e555aa5929/pic3.png?part=0.1&vt=ANaJVrGIIfB3zHDqxWzKXoNWjHfq89sAGkXLntw6uIxpFMH9ovm2ZKqm-ZGwWx1yoEwIpR2_OZZnV_LgM6QeSMccUIkvAhI0B9sci8FW5Rd7KOkrvM2yQw4) ![image description](https://16970802734400014542.googlegroups.com/attach/7c2b7b9736a5e/pic2-1.png?part=0.2&vt=ANaJVrH_wofFT2m-DTHEn7IJNoUeQ-6Ua5wZnsSRdx1jgQMUe0tM6QSja8scjB4hozNCNZDjFlJKUope-lSpyuJwzWyN3UEeTW_tS0qatp7xayr7uf3UP-A) ![image description](https://16970802734400014542.googlegroups.com/attach/7c2b7b9736a5e/pic1.png?part=0.1&view=1&vt=ANaJVrG1uuYMFHM_49woa1zXsdRIfuoqdf5mqFiecB7VqJPTK_hGyMcnMdwhy-8r_QAJGiTPua5MXprTUANHQoytyPAyD_le1rLfHxU3EqkXKWJQbvU6Jds)

How to use CRSM SLAM with a Turtlebot?

Hi!!! I'm trying to use CRSM SLAM with a Turtlebot in simulation, but when I run it, the following error appears in the console: `[ERROR] [1482804974.498150696, 10.510000000]: [CrsmSlam] Error in tf : "base_link" passed to lookupTransform argument target_frame does not exist.`. When I visualize the mapping in rviz, the "Robot Model" and the "Laser Scan" menus appear in red. I changed the map topic to "slam/occupancyGridMap" and the map building begins, but the Robot Model says "`No transform from []`" in all the links, and the Laser Scan says "`For frame [/camera_depth_frame]: No transform to fixed frame [map]. TF error: [Could not find a connection between 'map' and 'camera_depth_frame' because they are not part of the same tree. Tf has two or more unconnected trees.]`". I also changed the "Fixed Frame" from `map` to `base_footprint`, and then the Robot Model and the Laser Scan menus are OK, but the "Map" menu shows the error `No transform from [map] to [base_footprint]`. What can I do? I don't understand the TF package yet. I hope that you can help me. Thanks in advance!!!
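For orientation, a hedged sketch of how one might inspect and, for testing only, bridge the two unconnected trees from a terminal (the frame pair below is an assumption based on the error messages):

    rosrun tf view_frames                    # draws the tree(s) and shows where they break apart
    rosrun tf tf_echo map base_footprint     # fails while the trees are unconnected
    # for testing only, publish a fixed identity transform joining them:
    rosrun tf static_transform_publisher 0 0 0 0 0 0 map base_footprint 100

In a real setup, the map-to-robot transform should come from the SLAM node rather than from a static publisher.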

Why is it necessary to declare ros transform broadcaster in main?

Hi, I was trying to broadcast a tf in ROS:

    #include <ros/ros.h>
    #include <tf/transform_broadcaster.h>
    #include <geometry_msgs/TransformStamped.h>
    #include <tf/tf.h>

    ros::Timer robot_pose_broadcaster;

    void robotPoseCallBack(const ros::TimerEvent& eventalpha)
    {
        tf::TransformBroadcaster robotPoseBroadcaster;
        geometry_msgs::TransformStamped robotPose;
        robotPose.header.stamp = ros::Time::now();
        robotPose.header.frame_id = "/map";
        robotPose.child_frame_id = "/base_link";
        robotPose.transform.translation.x = 0.0;
        robotPose.transform.translation.y = 0.0;
        robotPose.transform.translation.z = 0.0;
        robotPose.transform.rotation.x = 0.0;
        robotPose.transform.rotation.y = 0.0;
        robotPose.transform.rotation.z = 0.0;
        robotPose.transform.rotation.w = 0.9999;
        robotPoseBroadcaster.sendTransform(robotPose);
        ROS_INFO("DONE POSE BROADCASTING");
    }

    int main(int argc, char** argv)
    {
        ros::init(argc, argv, "od_transform");
        ros::NodeHandle n;
        robot_pose_broadcaster = n.createTimer(ros::Duration(.1), robotPoseCallBack);
        // tf::TransformBroadcaster aphabetagamma;
        ros::spin();
        return 0;
    }

This code broadcasts nothing. But when I uncomment the line

    tf::TransformBroadcaster aphabetagamma;

it publishes the transform between map and base_link, even though I never use the variable aphabetagamma. Can someone help me understand this? Thanks.

How to guarantee tf last transformation in skeleton tracker

Hello all, I am programming a gesture recognition node using the skeleton tf tree provided by the openni_tracker node (using the depth information of a Kinect at 30 FPS). Under certain conditions (slight body angles, not facing the Kinect sensor perfectly), the tf output seems noisy and the algorithm fails to recognize the gestures. The original code to read the tf tree and construct the gesture vector is as follows:

    int ReadTransform(std::string targetFrame, std::string sourceFrame,
                      std::vector<double> &transformVector)
    {
        tf::StampedTransform transform;
        transformVector.clear();
        try
        {
            listener->waitForTransform(targetFrame, sourceFrame, ros::Time(0), ros::Duration(0.3));
            listener->lookupTransform(targetFrame, sourceFrame, ros::Time(0), transform);
        }
        catch (tf::TransformException &ex)
        {
            ROS_ERROR("%s", ex.what());
            ros::Duration(1.0).sleep();
            return -1;
        }
        transformVector.push_back(transform.getOrigin().x());
        transformVector.push_back(transform.getOrigin().y());
        transformVector.push_back(transform.getOrigin().z());
        transformVector.push_back(transform.getRotation().x());
        transformVector.push_back(transform.getRotation().y());
        transformVector.push_back(transform.getRotation().z());
        transformVector.push_back(transform.getRotation().w());
        return 0;
    }

    int ReadTransforms(std::vector<double> &gestureVector)
    {
        std::vector<double> transformVector;
        std::string user = boost::lexical_cast<std::string>(calibratedUser);
        gestureVector.clear();
        if (ReadTransform("left_hand_" + user, "torso_" + user, transformVector) < 0) return -1;
        InsertPointToGesture(transformVector, gestureVector);
        if (ReadTransform("left_elbow_" + user, "torso_" + user, transformVector) < 0) return -1;
        InsertPointToGesture(transformVector, gestureVector);
        if (ReadTransform("left_shoulder_" + user, "torso_" + user, transformVector) < 0) return -1;
        InsertPointToGesture(transformVector, gestureVector);
        if (ReadTransform("neck_" + user, "torso_" + user, transformVector) < 0) return -1;
        InsertPointToGesture(transformVector, gestureVector);
        if (ReadTransform("head_" + user, "torso_" + user, transformVector) < 0) return -1;
        InsertPointToGesture(transformVector, gestureVector);
        if (ReadTransform("right_shoulder_" + user, "torso_" + user, transformVector) < 0) return -1;
        InsertPointToGesture(transformVector, gestureVector);
        if (ReadTransform("right_elbow_" + user, "torso_" + user, transformVector) < 0) return -1;
        InsertPointToGesture(transformVector, gestureVector);
        if (ReadTransform("right_hand_" + user, "torso_" + user, transformVector) < 0) return -1;
        InsertPointToGesture(transformVector, gestureVector);
        return 0;
    }

Pros: the return from `lookupTransform` is immediate, very fast. Cons: I am not guaranteed the latest transformation. Could it be possible to have many tf transformations queued and to read transforms that occurred some seconds ago? I have modified the listener part:

    transformVector.clear();
    try
    {
        ros::Time t = ros::Time::now();
        listener->waitForTransform(targetFrame, sourceFrame, t, ros::Duration(0.3));
        listener->lookupTransform(targetFrame, sourceFrame, t, transform);
    }

Pros: the last transform is guaranteed. Cons: MUCH SLOWER (almost 0.5 s to perform eight transforms and construct the gesture vector). Is there an alternative way to get the last transformation available in tf while keeping the speed up? Thank you all very much, Alberto

How to run "tf view_frames" for a remapped tf

I remapped `/tf` to `/tf_0`, but I can't get `rosrun tf view_frames` to show me the tree for `/tf_0`. How do I do it?
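A sketch of what I would try first, assuming `view_frames` is a regular ROS node that honors standard remapping arguments (not verified for this use case):

    rosrun tf view_frames /tf:=/tf_0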

map frame does not exist and sim time true

Hello, I am using V-REP and I have created a scene with a robot publishing laser scan data and odometry data. Then I run slam_gmapping in order to create the map and visualize it in rviz. I also produce the tf tree in order to see whether the transform relations are correct. The odom, base_link and base_scan frames are broadcast through V-REP, whereas the map frame is broadcast through the slam_gmapping node. When I use the parameter **enable sim time true**, it seems that the map frame is missing from the tf tree. When I use the parameter **enable sim time false**, the map frame is there. Has anyone any idea why that is happening?
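For reference, a sketch of the sim-time wiring I would double-check in this situation (assuming the simulator is expected to publish `/clock`; the frame pair passed to tf_monitor is an assumption):

    # with sim time enabled, every node's clock follows the /clock topic
    rosparam set use_sim_time true
    rostopic echo /clock            # must actually be published by the simulator
    rosrun tf tf_monitor map odom   # shows whether (and how late) transforms arrive

If nothing publishes `/clock` while `use_sim_time` is true, nodes such as slam_gmapping can sit at time zero and never publish the map frame.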