Channel: ROS Answers: Open Source Q&A Forum - RSS feed

receive all data from tf message

`listener.lookupTransform('/frame1', '/frame2', rospy.Time(0))` works with two frames, so if I need to know the positions of all frames relative to frame1, do I have to call lookupTransform for each frame? Is that the right way? I mean, if I have 10 frames, I would call lookupTransform 9 times. Is this acceptable, performance-wise?
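For reference, calling lookupTransform once per target frame is the usual pattern, and it is cheap: the listener buffers /tf messages locally, so each lookup is an in-memory query rather than network traffic. A minimal sketch, assuming the frames are discovered with getFrameStrings():

```python
#!/usr/bin/env python
# Sketch: look up every known frame relative to frame1.
# Each lookupTransform call only reads the listener's local buffer.
import rospy
import tf

rospy.init_node('all_frames_demo')
listener = tf.TransformListener()
rospy.sleep(1.0)  # give the listener time to buffer /tf messages

for frame in listener.getFrameStrings():
    try:
        trans, rot = listener.lookupTransform('/frame1', frame, rospy.Time(0))
        rospy.loginfo('/frame1 -> %s: %s %s', frame, trans, rot)
    except (tf.LookupException, tf.ConnectivityException,
            tf.ExtrapolationException):
        rospy.logwarn('no transform for %s', frame)
```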

Gazebo and move_base problem

Hello, I'm using Ubuntu 12.04 and ROS Fuerte, and I'm simulating SLAM using Gazebo, following the Learning ROS book. I have now reached move_base.launch, but when I run it, it gives the following errors:

```
[ WARN] [1432083250.106400281, 919.561000000]: Waiting on transform from /base_footprint to /map to become available before running costmap, tf error:
[ WARN] [1432083251.263770834, 923.948000000]: No laser scan received (and thus no pose updates have been published) for 923.948000 seconds. Verify that data is being published on the /scan topic.
```

However, I verified that the laser scans are being published by using `rostopic echo scan`. This is my move_base file: and this is my tf tree:

![tf tree](/upfiles/14320843565230808.png)

Any idea? Thank you.

After updating system, robot_state_publisher process always dies

Hi all, I have met a strange problem. Yesterday I updated some things in my system through the upgrade notification and also ran `sudo apt-get update`. Today, none of my ROS projects with rviz simulation work, although they all worked well before. The symptoms are as follows.

System: ROS Indigo
robot_state_publisher: ros-indigo-robot-state-publisher, version 1.10.4-0trusty-20150430-1718-+0000
joint_state_publisher: ros-indigo-joint-state-publisher, version 1.11.7-0trusty-20150429-0827-+0000

1. `roslaunch SIA20_moveit_config moveit_planning_execution.launch`

```
started roslaunch server http://XXX-ThinkPad-T500:47915/

SUMMARY
========

PARAMETERS
 * /controller_joint_names: ['joint_s', 'join...
 * /mongo_wrapper_ros_XXX_ThinkPad_T500_9935_3121796978277564489/database_path: /home/XXX/industr...
 * /mongo_wrapper_ros_XXX_ThinkPad_T500_9935_3121796978277564489/overwrite: False
 * /move_group/allow_trajectory_execution: True
 * /move_group/capabilities: move_group/MoveGr...
 * /move_group/controller_list: [{'default': True...
 * /move_group/controller_manager_name: pr2_controller_ma...
 * /move_group/controller_manager_ns: pr2_controller_ma...
..............................

[robot_state_publisher-4] process has died [pid 10035, exit code -11, cmd /home/XXX/catkin_ws/devel/lib/robot_state_publisher/robot_state_publisher __name:=robot_state_publisher __log:=/home/XXX/.ros/log/72142730-009f-11e5-9cf8-00226811834c/robot_state_publisher-4.log].
log file: /home/XXX/.ros/log/72142730-009f-11e5-9cf8-00226811834c/robot_state_publisher-4*.log
```

The MoveIt rviz GUI then shows Global Status: warn, Fixed Frame: no tf data. Actual error: Fixed Frame [base_link] does not exist. This warning is hard to believe, since the project worked well before, so I rechecked the *.srdf file. The base_link is fixed there:

```
virtual_joint name="FixedBase" type="fixed" parent_frame="world" child_link="base_link"
```

I have no idea what is going on, so I also checked the system's built-in PR2 demo.launch, which likewise worked well before.

2. `roslaunch pr2_moveit_config demo.launch`

```
process[robot_state_publisher-4]: started with pid [27667]
[ WARN] [1432314173.999282162]: TF to MSG: Quaternion Not Properly Normalized
[robot_state_publisher-4] process has died [pid 27667, exit code -11, cmd /home/XXX/catkin_ws/devel/lib/robot_state_publisher/robot_state_publisher __name:=robot_state_publisher __log:=/home/XXX/.ros/log/21c9d20c-00a4-11e5-bb8c-00226811834c/robot_state_publisher-4.log].
log file: /home/XXX/.ros/log/21c9d20c-00a4-11e5-bb8c-00226811834c/robot_state_publisher-4*.log
[robot_state_publisher-4] restarting process
process[robot_state_publisher-4]: started with pid [27692]
[ WARN] [1432314174.317632982]: TF to MSG: Quaternion Not Properly Normalized
[robot_state_publisher-4] process has died [pid 27692, exit code -11, cmd /home/XXX/catkin_ws/devel/lib/robot_state_publisher/robot_state_publisher __name:=robot_state_publisher __log:=/home/XXX/.ros/log/21c9d20c-00a4-11e5-bb8c-00226811834c/robot_state_publisher-4.log].
log file: /home/XXX/.ros/log/21c9d20c-00a4-11e5-bb8c-00226811834c/robot_state_publisher-4*.log
[robot_state_publisher-4] restarting process
........................
```

What is the matter here? I have searched the internet as best I can, but found no problems like this. I have also noticed that joint_state_publisher is a new version; its GUI now contains a Randomize button. What shall I do? Should I reinstall the whole ROS system?
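A side note on the "TF to MSG: Quaternion Not Properly Normalized" warning above: tf expects unit quaternions. This warning is probably unrelated to the segfault itself (exit code -11 looks more like a binary incompatibility from the partial upgrade, though that is an assumption), but a quick way to sanity-check and fix a quaternion from Python is, as a sketch:

```python
# Sketch: normalize a quaternion before handing it to tf.
# Plain numpy; the input values are made-up examples.
import numpy as np

def normalize_quaternion(q):
    """Scale (x, y, z, w) to unit length so tf accepts it."""
    q = np.asarray(q, dtype=float)
    norm = np.linalg.norm(q)
    if norm == 0:
        raise ValueError("zero quaternion cannot be normalized")
    return q / norm

print(normalize_quaternion([0.0, 0.0, 0.7, 0.7]))  # -> a unit quaternion
```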

NameError: global name 'ms' is not defined after running learning_tf start_demo.launch

Hi. I'm running through the tf tutorials in sequence and am currently at [tf Tutorials: Writing a tf broadcaster (Python)](http://wiki.ros.org/tf/Tutorials/Writing%20a%20tf%20broadcaster%20%28Python%29). When I run `roslaunch learning_tf start_demo.launch` I get the following errors:

```
[ERROR] [WallTime: 1432379077.642128] bad callback:
Traceback (most recent call last):
  File "/opt/ros/indigo/lib/python2.7/dist-packages/rospy/topics.py", line 709, in _invoke_callback
    cb(msg, cb_args)
  File "/home/andrew/catkin_ws/src/learning_tf/nodes/turtle_tf_broadcaster.py", line 12, in handle_turtle_pose
    tf.transformations.quaternion_from_euler(0, 0, ms.theta),
NameError: global name 'ms' is not defined
```

I can see the two turtles appearing in the window. I can move one turtle via the arrow keys, but the second turtle does not follow turtle 1. Note: I have run apt-get update and lots of ROS packages were updated.
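The traceback points at the fix: the callback's parameter is `msg`, but line 12 references an undefined `ms`. For reference, the corrected broadcaster as it appears in the wiki tutorial looks like this (check it against your local nodes/turtle_tf_broadcaster.py):

```python
#!/usr/bin/env python
# Corrected tf broadcaster from the tutorial: every reference must use
# the callback parameter name (msg), not the undefined 'ms'.
import roslib
roslib.load_manifest('learning_tf')
import rospy
import tf
import turtlesim.msg

def handle_turtle_pose(msg, turtlename):
    br = tf.TransformBroadcaster()
    br.sendTransform((msg.x, msg.y, 0),
                     tf.transformations.quaternion_from_euler(0, 0, msg.theta),
                     rospy.Time.now(),
                     turtlename,
                     "world")

if __name__ == '__main__':
    rospy.init_node('turtle_tf_broadcaster')
    turtlename = rospy.get_param('~turtle')
    rospy.Subscriber('/%s/pose' % turtlename,
                     turtlesim.msg.Pose,
                     handle_turtle_pose,
                     turtlename)
    rospy.spin()
```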

Turtlebot not moving according to input (in simulation and reality)

Hello, I'm using a Turtlebot with a Kobuki base, running ROS Hydro on Ubuntu 12.04. I need the Turtlebot to move in the shape of a square. Guided by these examples: [albany](https://code.google.com/p/albany-ros-pkg/source/browse/branches/seminar2011/ex_move/bin/square_with_tf.py), [turtlebot motion](https://www.youtube.com/watch?v=bqJ7O5Xt9jc) and [turtlebot tutorials](https://github.com/markwsilliman/turtlebot), I put together one script that only sends the appropriate cmd_vel messages to make the robot move at a certain speed for a chosen time, and another that uses the odom->base_footprint tf to make the robot move a chosen distance at a certain speed.

I first tested the scripts on turtlesim and they seemed to work well, but when I tested them on the Gazebo turtlebot simulator and on the real turtlebot, this was the outcome (obtained from the /odom topic in rviz):

Gazebo Simulation - From the code that doesn't use tf:

![Gazebo Simulation - From the code that doesn't use tf](/upfiles/14327375955978795.png)

Gazebo Simulation - From the code that uses the odom->base_footprint tf:

![Gazebo Simulation - From the code that uses the odom->base_footprint tf](/upfiles/1432737660546709.png)

Real Turtlebot - From the code that doesn't use tf:

![Real Turtlebot - From the code that doesn't use tf](/upfiles/1432737897538616.png)

Real Turtlebot - From the code that uses the odom->base_footprint tf:

![Real Turtlebot - From the code that uses the odom->base_footprint tf](/upfiles/14327378815518645.png)

From what I've read, the odom->base_footprint tf uses the wheel odometry and IMU data, and the Kobuki base already comes with a calibrated gyro, so I don't think the problem is in the odometry tf. Also, the movement of the turtlebot with the code that doesn't use tf looks similar to what the /odom topic shows. The result on the real turtlebot is terrible, but I was surprised that the code doesn't work well even in simulation. I don't really know how to improve these results: I expected the tf-based script to be the improvement over the first one, yet it is the one that works worst. Am I misusing the tfs, or am I missing something particular about the turtlebot with the messages I'm sending? At least with the first script it looks as if the robot doesn't move as much as the commanded speed would indicate, but the tf should be indifferent to that. I read the similar questions in the FAQs but none of them helped me solve this. Thank you very much in advance, and sorry for the rviz plots; I didn't find another way to get the trajectory of the robot.

----------

**The two codes are below:**

----------

Code that only sends the appropriate cmd_vel messages to make the robot move at a certain speed for a chosen time:

```python
#!/usr/bin/env python
# http://library.isr.ist.utl.pt/docs/roswiki/rospy%282f%29Overview%282f%29Time.html
""" Example code of how to move a robot around the shape of a square. """

import roslib
import rospy
import math
import time

from geometry_msgs.msg import Twist


class square:
    """ This example is in the form of a class. """

    def __init__(self):
        """ This is the constructor of our class. """
        # register this function to be called on shutdown
        rospy.on_shutdown(self.cleanup)

        # publish to cmd_vel
        self.p = rospy.Publisher('cmd_vel', Twist)
        #twist = Twist()

        # give our node/publisher a bit of time to connect
        rospy.sleep(1)

        # slow rate, 2 seconds sleep
        r = rospy.Rate(0.5)

        for i in range(4):
            # create a Twist message, fill it in to drive forward
            twist = Twist()
            start = rospy.get_time()
            print(rospy.get_time())
            while rospy.get_time() - start < 4:
                twist.linear.x = 0.25  # move 0.25 m/s
                twist.linear.y = 0
                twist.linear.z = 0
                twist.angular.x = 0
                twist.angular.y = 0
                twist.angular.z = 0
                self.p.publish(twist)
                rospy.loginfo("Moving the robot forward.")
                r.sleep()

            twist = Twist()
            start = rospy.get_time()
            print(rospy.get_time())
            while rospy.get_time() - start < 2:
                twist.linear.x = 0
                twist.linear.y = 0
                twist.linear.z = 0
                twist.angular.x = 0
                twist.angular.y = 0
                twist.angular.z = math.pi / 4.0  # move pi/4 rad/sec
                self.p.publish(twist)
                rospy.loginfo("Turning the robot")
                r.sleep()

    def cleanup(self):
        # stop the robot!
        twist = Twist()
        self.p.publish(twist)


if __name__ == "__main__":
    rospy.init_node('control', anonymous=True)
    square()
```

This code uses the odom->base_footprint tf to make the robot move a chosen distance at a certain speed:

```python
#!/usr/bin/env python
""" Example code of how to move a robot around the shape of a square, using tf. """

# we always import these
import roslib
import rospy
import tf
import math

# recall: robots generally take base movement commands on a topic
# called "cmd_vel" using a message type "geometry_msgs/Twist"
from geometry_msgs.msg import Twist


class square:
    """ This example is in the form of a class. """

    def __init__(self):
        """ This is the constructor of our class. """
        print 'inwut'
        # register this function to be called on shutdown
        rospy.on_shutdown(self.cleanup)
        self.listener = tf.TransformListener()
        print 'k1'

        # publish to cmd_vel
        self.p = rospy.Publisher('cmd_vel', Twist)

        # give our node/publisher a bit of time to connect
        rospy.sleep(1.0)

        twist = Twist()
        #first = True
        for i in range(2):
            print 'hey'
            done = False
            first = 1
            print first
            while not done:
                # create a Twist message, fill it in to drive forward
                twist.linear.x = 0.2
                twist.angular.z = 0
                self.p.publish(twist)
                rospy.loginfo("moving forward")
                try:
                    ((new_x, y, z), rot) = self.listener.lookupTransform('odom', 'base_footprint', rospy.Time(0))
                except (tf.LookupException, tf.ConnectivityException):
                    rospy.logerr("AHHHH")
                #print 'mid!'
                if first:
                    orig_x = new_x
                    first = 0
                    print 'first'
                done = bool(abs(new_x - orig_x) > 0.8)
                print done
                print new_x - orig_x

            done = False
            first = 1
            while not done:
                twist.angular.z = math.pi / 4.0  # turns at pi/4 rad/sec
                twist.linear.x = 0
                self.p.publish(twist)
                rospy.loginfo("turning")
                try:
                    ((x, y, z), rot) = self.listener.lookupTransform('odom', 'base_footprint', rospy.Time(0))
                except (tf.LookupException, tf.ConnectivityException):
                    rospy.logerr("AHHHH")
                (phi, psi, theta) = tf.transformations.euler_from_quaternion(rot)
                if first:
                    orig_ang = theta
                    first = 0
                done = bool(abs(theta - orig_ang) >= math.pi / 2.0)

            done = False
            first = 1
            print first
            while not done:
                # create a Twist message, fill it in to drive forward
                twist.linear.x = 0.2
                twist.angular.z = 0
                self.p.publish(twist)
                rospy.loginfo("moving forward")
                try:
                    ((x, new_y, z), rot) = self.listener.lookupTransform('odom', 'base_footprint', rospy.Time(0))
                except (tf.LookupException, tf.ConnectivityException):
                    rospy.logerr("AHHHH")
                #print 'mid!'
                if first:
                    orig_y = new_y
                    first = 0
                    print 'first'
                done = bool(abs(new_y - orig_y) > 0.8)
                print done
                print new_y - orig_y

            done = False
            first = 1
            while not done:
                twist.angular.z = math.pi / 16.0
                twist.linear.x = 0
                self.p.publish(twist)
                rospy.loginfo("turning")
                try:
                    ((x, y, z), rot) = self.listener.lookupTransform('odom', 'base_footprint', rospy.Time(0))
                except (tf.LookupException, tf.ConnectivityException):
                    rospy.logerr("AHHHH")
                (phi, psi, theta) = tf.transformations.euler_from_quaternion(rot)
                if first:
                    orig_ang = theta
                    first = 0
                done = bool(abs(theta - orig_ang) >= math.pi / 2.0)

    def cleanup(self):
        # stop the robot!
        twist = Twist()
        self.p.publish(twist)


if __name__ == "__main__":
    rospy.init_node('square')
    try:
        square()
    except:
        pass
```

Incomplete topic data on remote machine

We are trying to use a networked rviz instance to view a map from hector_slam alongside /scan data from a SICK 310. The root of our issue seems to be inconsistent behaviour when receiving topics on the remote machine. We can see all topics/nodes, but in some cases we are unable to `rostopic echo`/`rostopic hz` a topic, even though `rostopic info` shows the correct pub/sub information. In some cases we can echo partial data, such as an incomplete tf tree: we can echo /tf and get a complete list when using the host (robot) via ssh etc., but the networked machine does not see the entire tf tree, only the laser and laser_mount_link transforms, i.e. it is missing /base_link, /map etc. I've got the visual outputs if people want them, but they only show that the remote receives partial data. This issue means we can successfully visualise scan data in rviz with a laser or laser_mount_link fixed frame, but have no other frames, as the rviz machine never receives the extra topic data...

Ubuntu 14.04 on a generic Aspire Atom netbook (remote rviz); Lubuntu 14.04 on an ODroid XU3 (host). Indigo on both machines, same networking infrastructure, and both can ping each other etc. Thanks!

Laserscan tf to odom frame (RVIZ)

Hi all, we have a problem with a point that is displayed in rviz but does not seem to exist in our data, because all filtering efforts fail. We do the following steps:

1. A laser scanner subscriber delivers scan data.
2. We transform the data into our odometry frame with the tf package.
3. We throw away some points that are too far away for our calculation.
4. We apply an algorithm to the remaining data.

We have tried different algorithms, but the problem remains. The point lies at the odometry frame origin.
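One common cause worth checking (an assumption, since the driver isn't shown): many laser drivers report invalid returns as a range of 0.0, and a zero range transforms to a frame origin, which would survive a "too far away" filter. A minimal sketch that drops invalid returns before any transformation; the topic names are placeholders:

```python
#!/usr/bin/env python
# Sketch: republish a LaserScan with invalid returns (NaN or outside the
# reported min/max range) marked as "no return" before transforming.
import math
import rospy
from sensor_msgs.msg import LaserScan

def scan_cb(scan):
    cleaned = list(scan.ranges)
    for i, r in enumerate(cleaned):
        if math.isnan(r) or r < scan.range_min or r > scan.range_max:
            cleaned[i] = float('inf')  # mark as "no return"
    scan.ranges = cleaned
    pub.publish(scan)

rospy.init_node('scan_filter')
pub = rospy.Publisher('scan_filtered', LaserScan)
rospy.Subscriber('scan', LaserScan, scan_cb)
rospy.spin()
```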

How to update map to odom with corrections from SLAM?

Our basic goal is to publish a map->odom transform that corrects for the drift of our odometry. We have a localization algorithm that uses landmarks to give an absolute position w.r.t. the map frame. Our problem so far has been how best to achieve this using the tf library. The strategy we are trying right now looks like:

```cpp
// Get newPose
tf::Pose newPose = getPoseFromLandmarks();

// Get difference between current pose (which may be different from the previous
// getPoseFromLandmarks result because of odom updates) and newPose
tf::Transform poseDifference = currentPose.inverse() * newPose;

// Apply that difference to our current map->odom transform
tf::Transform newMapToOdom = poseDifference * currentMapToOdom;

// Now broadcast the transform and we're done
```

This was working fine until we started calculating rotation in getPoseFromLandmarks(). When we apply rotation to the newPose, the poseDifference as well as the newMapToOdom values are not as we expect. It seems that the rotation happens before the translation (e.g. {x,y,z of poseDifference} != {x,y,z of currentPose} - {x,y,z of newPose}). This usually causes the corrections to accumulate, and our x,y,z quickly goes to infinity. We have tried every combination we could think of to make this work: right vs. left multiply, changing what we invert and where, calculating the translation difference separately from the rotation difference. The only successful strategy we have had is to manually calculate the difference in the pose origins, manually calculate the difference in the yaw of both poses (which works since we are only updating the robot's yaw for now), and then manually apply these results to the currentMapToOdom transform:

```cpp
// Get the difference in translation and yaw
poseDifference.setOrigin(newPose.getOrigin() - currentPose.getOrigin());
poseDifference.setRotation(tf::createQuaternionFromYaw(
    tf::getYaw(newPose.getRotation()) - tf::getYaw(currentPose.getRotation())));

// Apply the difference to correct mapToOdom
newMapToOdom.setOrigin(currentMapToOdom.getOrigin() + poseDifference.getOrigin());
newMapToOdom.setRotation(tf::createQuaternionFromYaw(
    tf::getYaw(currentMapToOdom.getRotation()) + tf::getYaw(poseDifference.getRotation())));
```

While this works, it feels really hacky and won't work for roll/pitch corrections. It seems like there should be a more elegant way to achieve this with tf math, or maybe a better strategy for calculating the corrected map->odom transform? Any help would be appreciated; thanks in advance.

**Edit:** The second approach is the correct way to solve the problem.
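For comparison, a common alternative (the approach amcl takes) avoids accumulating corrections entirely: each time the landmark localizer yields an absolute map->base pose, recompute map->odom from scratch as T_map_odom = T_map_base * T_odom_base^-1. A minimal sketch with made-up poses, written in Python with tf.transformations; the same algebra applies to tf::Transform in C++ (newMapToOdom = mapToBase * odomToBase.inverse()):

```python
#!/usr/bin/env python
# Sketch of the "recompute, don't accumulate" strategy: whenever an
# absolute map->base pose arrives, rebuild map->odom from scratch.
import numpy as np
import tf.transformations as tfs

def make_T(x, y, yaw):
    """Build a 4x4 homogeneous transform from x, y and yaw."""
    T = tfs.euler_matrix(0, 0, yaw)
    T[0, 3] = x
    T[1, 3] = y
    return T

# Absolute robot pose in the map frame, e.g. from getPoseFromLandmarks().
T_map_base = make_T(2.0, 1.0, 0.30)
# Robot pose in the odom frame at (as close as possible to) the same time.
T_odom_base = make_T(1.8, 1.1, 0.25)

# T_map_base = T_map_odom * T_odom_base  =>  solve for map->odom:
T_map_odom = np.dot(T_map_base, np.linalg.inv(T_odom_base))

print(tfs.translation_from_matrix(T_map_odom))
print(tfs.euler_from_matrix(T_map_odom))
```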

Calculate velocity of a moving frame from two different timestamps

Hi everyone, I have a moving frame (let's call it /moving) that is obviously moving w.r.t. some other frame (e.g. /world). With this setup, I can calculate the speed of the moving frame from two different timestamps in a trivial way like:

```cpp
listener.lookupTransform("moving", "world", new_timestamp, t1);
listener.lookupTransform("moving", "world", old_timestamp, t2);
delta_time = new_timestamp - old_timestamp;
t3 = t2.inverseTimes(t1);  // the position displacement
translational_velocity = t3.getOrigin() / delta_time.toSec();
```

My question is: can I use the lookupTwist method to evaluate the DELTA speed between two different times (but the same frame)? I mean,

```cpp
listener->lookupTwist("moving", "world", TIMESTAMP, ros::Duration(0.1), frame_speed);
```

gives the velocity w.r.t. the fixed frame, not the deltas. Ideas? Thank you :-) Augusto
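As far as I know, tf has no single call for a velocity delta, only window-averaged velocity, so one option is simply to call lookupTwist once per epoch and difference the results yourself. A sketch using the Python listener (frame names and the 0.1 s window come from the question; the timestamps are made up):

```python
#!/usr/bin/env python
# Sketch: approximate the change in velocity of /moving w.r.t. /world
# by differencing two lookupTwist results at different times.
import rospy
import tf

rospy.init_node('delta_twist_demo')
listener = tf.TransformListener()
rospy.sleep(2.0)  # let the buffer fill

t_new = rospy.Time.now() - rospy.Duration(0.2)
t_old = t_new - rospy.Duration(1.0)
window = rospy.Duration(0.1)

lin_new, ang_new = listener.lookupTwist('moving', 'world', t_new, window)
lin_old, ang_old = listener.lookupTwist('moving', 'world', t_old, window)

delta_lin = [a - b for a, b in zip(lin_new, lin_old)]
delta_ang = [a - b for a, b in zip(ang_new, ang_old)]
print('delta linear velocity:', delta_lin)
print('delta angular velocity:', delta_ang)
```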

Best practice: should tf::Listener be a global or local variable?

Hi there, I'm wondering: do tf::TransformListener or tf::TransformBroadcaster variables take time to fire up (e.g. because of topic subscription)? It would be more convenient for me to declare a new listener whenever I need to look up a tf (i.e. as a local variable), but I wonder if there are performance issues in doing this.
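For context: a tf listener does need warm-up time, because it subscribes to /tf and can only answer lookups from messages it has buffered since construction; a listener created immediately before a lookup starts with an empty buffer and will typically throw a LookupException. The usual pattern is one long-lived listener, sketched here in Python with assumed frame names (the same reasoning applies to the C++ tf::TransformListener; broadcasters matter less since they only publish):

```python
#!/usr/bin/env python
# Sketch: keep one long-lived TransformListener and reuse it everywhere.
import rospy
import tf

class FrameUser(object):
    def __init__(self):
        # created once; it keeps buffering /tf in the background
        self.listener = tf.TransformListener()

    def current_offset(self, parent, child):
        # wait until the buffer actually contains the transform
        self.listener.waitForTransform(parent, child, rospy.Time(0),
                                       rospy.Duration(1.0))
        return self.listener.lookupTransform(parent, child, rospy.Time(0))

if __name__ == '__main__':
    rospy.init_node('listener_lifetime_demo')
    user = FrameUser()
    print(user.current_offset('/world', '/base_link'))  # frame names assumed
```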

tf configuration issues with ccny_rgbd.

Hello everyone, the ccny_rgbd package is compiled and ready to go (Ubuntu 12.04, ROS Hydro). Right now I run it according to the readme, with publish cloud set to true. I'm using a rosbag with Kinect data, so I have the device nodelet disabled. When I open up rviz and load the vo+mapping.rviz config file, the kinect frame seems to be moving around with respect to my world frame, and the other frames (openni_camera, the optical frames for rgb and depth) are at some transformation to the kinect frame. Apart from this, however, nothing shows up in rviz: no path, no model, even after I fiddled with the topics of the rviz visualization markers. The ROS wiki says it needs the transform between camera_link and the moving camera frame. From my understanding, this should just be a transform broadcaster publishing a fixed transform between camera_link and openni_camera OR kinect, but that doesn't seem to work. Can anyone tell me what the correct tf configuration is? Also, what are the corresponding changes in my rviz config file, considering the tf tree the rviz file has is between /odom and /camera_link? [Also see this.](https://github.com/ccny-ros-pkg/ccny_rgbd_tools/issues/44) Thanks!

Robot_state_publisher dies on ARM-based Turtlebot

Whenever I run `roslaunch turtlebot_bringup minimal.launch` I get the following error:

```
[robot_state_publisher-2] process has died [pid 1597, exit code -11, cmd /opt/ros/indigo/lib/robot_state_publisher/robot_state_publisher __name:=robot_state_publisher __log:=/home/odroid/.ros/log/02472614-0afe-11e5-884e-c22209f25fe8/robot_state_publisher-2.log].
log file: /home/odroid/.ros/log/02472614-0afe-11e5-884e-c22209f25fe8/robot_state_publisher-2*.log
```

The log file is empty, however. If I run `rosrun robot_state_publisher robot_state_publisher` I get a segmentation fault. When I compile and run robot_state_publisher under gdb, here's the segfault info:

```
Starting program: /home/odroid/catkin_ws/devel/lib/robot_state_publisher/robot_state_publisher
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/arm-linux-gnueabihf/libthread_db.so.1".

Program received signal SIGSEGV, Segmentation fault.
0xb660bc74 in ?? () from /lib/arm-linux-gnueabihf/libpcre.so.3
```

I'd imagine this is because I'm running everything on an Odroid U3 with Ubuntu 14.04 and [ROS Indigo Ubuntu ARM](http://wiki.ros.org/indigo/Installation/UbuntuARM). Anyone else have this issue?

Using tf::transform_datatypes in python

The C++ [API](http://docs.ros.org/diamondback/api/tf/html/c++/transform__datatypes_8h.html) for `tf` provides many conversion utilities to convert messages to transforms (for example, `tf::poseMsgToTF`). Is there any way to use these within Python? The closest I've found is to do:

```python
frame = tf_conversions.fromMsg(pose)
```

But this returns a `PyKDL.Frame` object, not a transform. Ideally I'd like to go directly from a pose message to a `TF` transform.
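One workaround, sketched under the assumption that the goal is feeding tf.TransformBroadcaster or comparing against lookupTransform results: the Python tf API works with plain (translation, quaternion) tuples rather than a transform class, and those are trivial to build from a Pose message by hand:

```python
#!/usr/bin/env python
# Sketch: convert a geometry_msgs/Pose into the (translation, rotation)
# tuples the Python tf API uses. Frame names below are hypothetical.
import rospy
import tf
from geometry_msgs.msg import Pose

def pose_to_tf_tuples(pose):
    """geometry_msgs/Pose -> the tuples that sendTransform() and
    lookupTransform() work with."""
    trans = (pose.position.x, pose.position.y, pose.position.z)
    rot = (pose.orientation.x, pose.orientation.y,
           pose.orientation.z, pose.orientation.w)
    return trans, rot

if __name__ == '__main__':
    rospy.init_node('pose_to_tf_demo')
    br = tf.TransformBroadcaster()
    pose = Pose()
    pose.position.x = 1.0
    pose.orientation.w = 1.0  # identity rotation
    trans, rot = pose_to_tf_tuples(pose)
    br.sendTransform(trans, rot, rospy.Time.now(), 'object', 'world')
```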

registered depth image and RGB image frames

Hi everyone, I want to generate an XYZRGB point cloud from a registered depth image and an RGB image, as described on [depth_image_proc/point_cloud_xyzrgb](http://wiki.ros.org/depth_image_proc?distro=jade). For this purpose I wrote a small launch file starting openni_launch and the mentioned nodelet. Of course, both inputs have a different frame:

- camera/rgb/image_rect_color -> Frame: /camera_rgb_optical_frame
- camera/depth/image_rect -> Frame: /camera_depth_optical_frame

but the tf transformations between them are known, based on the definitions in openni.launch. I was surprised that ROS was not able to solve this problem automatically... I got:

```
Depth image frame id [/camera_depth_optical_frame] doesn't match RGB image frame id [/camera_rgb_optical_frame]
```

What is the most effective way to overcome this problem, without a manual transformation of one of the messages? Looking forward to your hints, Poseidonius

How do you use inverseTimes()?

How do you use inverseTimes? From reading the docs, it looks like it should be a member function of Transform() and take another Transform() as the argument. However, the following code:

```python
from geometry_msgs.msg import Transform

my_tf1 = Transform()
my_tf2 = Transform()
my_tf1.translation.x = 2
my_tf2.translation.x = 5
my_tf1.inverseTimes(my_tf2)
```

produces the error:

> AttributeError: 'Transform' object has no attribute 'inverseTimes'

Is there a different object in tf called Transform that I should be using instead?
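For context: `inverseTimes()` belongs to the C++ `tf::Transform` class, while `geometry_msgs/Transform` is a plain message with no methods. In Python, a sketch of the equivalent result (t1 inverse times t2) using `tf.transformations` matrices:

```python
# Sketch: compute the matrix equivalent of C++ t1.inverseTimes(t2)
# for two geometry_msgs/Transform messages.
import numpy as np
import tf.transformations as tfs
from geometry_msgs.msg import Transform

def to_matrix(t):
    """geometry_msgs/Transform -> 4x4 homogeneous matrix."""
    T = tfs.quaternion_matrix([t.rotation.x, t.rotation.y,
                               t.rotation.z, t.rotation.w])
    T[0:3, 3] = [t.translation.x, t.translation.y, t.translation.z]
    return T

def inverse_times(t1, t2):
    """Matrix equivalent of the C++ call t1.inverseTimes(t2)."""
    return np.dot(np.linalg.inv(to_matrix(t1)), to_matrix(t2))

my_tf1 = Transform()
my_tf2 = Transform()
my_tf1.rotation.w = 1  # identity rotation; a default message is all zeros
my_tf2.rotation.w = 1
my_tf1.translation.x = 2
my_tf2.translation.x = 5

print(inverse_times(my_tf1, my_tf2))  # translation column should read x = 3
```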

Stack/package robot_setup_tf not found

Hi all. I'm new to ROS. I'm tasked with using a Hokuyo UST-20LX's laser scan to scan an area, and I'm stuck with the robot_setup_tf part. I have already set up my workspace environment with:

```
~/catkin_ws$ source devel/setup.bash
```

and to confirm:

```
~/catkin_ws$ echo $ROS_PACKAGE_PATH
/home/gordon/catkin_ws/src:/opt/ros/groovy/share:/opt/ros/groovy/stacks
```

Afterwards I ran catkin_make and catkin_make install (no errors) and tried to run:

```
rosrun robot_setup_tf tf_broadcaster
```

It gives me:

```
[rospack] Error: stack/package robot_setup_tf not found
```

I suspect the problem lies with the environment configuration (just my guess), so I ran `echo $ROS_PACKAGE_PATH` again and got:

```
/opt/ros/groovy/share:/opt/ros/groovy/stacks
```

Any problems with the environment setup? If not, what is preventing tf_broadcaster from starting up? Thanks!

Need help with setting up Hokuyo UST-20LX

Hi all, I'm new to ROS. I have installed hector_slam, urg_node and robot_setup_tf. I managed to run:

```
rosrun urg_node urg_node _ip_address:="192.168.0.10"
[ INFO] [1434430207.673776954]: Connected to network device with intensity and ID: H1404423
[ INFO] [1434430207.707885934]: Streaming data.
```

and managed to start rviz with:

```
roslaunch hector_slam_launch tutorial.launch
```

However, I have problems with robot_setup_tf, which I asked about in a separate thread: **http://answers.ros.org/question/211427/stackpackage-robot_setup_tf-not-found/**. Besides that, are there any more nodes/configuration steps I need? Thanks!

Where to put a launch file?

Hi all. I'm trying to create a launch file as part of my Hokuyo tf setup. Credit to Malefitz for the code. However, I'm wondering where I should copy the .launch file to. Thanks for any help!

Dynamically adding a static transformation to tf

I have written an object in C++ which contains position and orientation information in member variables. These variables are not likely to change over the lifetime of the object. I am using these variables to perform my own hard-coded transforms between the world and the frame defined by each object. I realize now that this is probably not the correct way to do things, and I should be using tf to do this work for me. My question is: how do I set up a static frame for each object when it is constructed? I know I can create a static_transform_publisher node to do this, but I'm not too excited about creating a bunch of nodes during program operation that don't really do anything; it seems opposed to the ROS design philosophy. Is there some object I can create that will do this for me?
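One possibility, assuming tf2 is available alongside tf in your distro: tf2_ros provides a StaticTransformBroadcaster that latches static transforms on /tf_static from inside an existing node, with no extra processes. A Python sketch with made-up frame names and pose values; the C++ class tf2_ros::StaticTransformBroadcaster works the same way and could be a member of each object:

```python
#!/usr/bin/env python
# Sketch: latch a static transform for an object from inside one node.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node('object_frames')
broadcaster = tf2_ros.StaticTransformBroadcaster()

t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = 'world'
t.child_frame_id = 'object_1'
t.transform.translation.x = 1.0
t.transform.translation.y = 2.0
t.transform.rotation.w = 1.0  # identity orientation

broadcaster.sendTransform(t)  # published once, latched on /tf_static
rospy.spin()
```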

Need help with hector slam tf

Hi all. I have successfully set up my Hokuyo UST-20LX to do laser scanning, and I'm now trying to use the hector_slam package. However, I have problems regarding the tf. Using `rosrun tf tf_monitor`:

```
Frames:
Frame: /base_laser published by /robot_tf_publisher Average Delay: 0.000538933 Max Delay: 0.00112979
Frame: /base_link published by /odom_to_base_link Average Delay: -0.0995477 Max Delay: 0
Frame: /laser published by /US6 Average Delay: -0.0995496 Max Delay: 0

All Broadcasters:
Node: /US6 9.99601 Hz, Average Delay: -0.0995496 Max Delay: 0
Node: /odom_to_base_link 9.99599 Hz, Average Delay: -0.0995477 Max Delay: 0
Node: /robot_tf_publisher 100.099 Hz, Average Delay: 0.000538933 Max Delay: 0.00112979
```

I tried to run the hector_geotiff node, but got a warning:

```
[ WARN] [1435203912.226730957]: No transform between frames /map and /base_link available after 20.001726 seconds of waiting. This warning only prints once.
```

That means I haven't connected /map and /base_link, and I'm not sure how to do that. I tried looking for answers but to no avail. Also, I ran mapping_default.launch from the hector_mapping package and got this:

```
[ INFO] [1435206678.813741923]: lookupTransform base_footprint to /laser timed out. Could not transform laser scan into base_frame.
```

My task is to map out an area using hector slam, like this one here: https://www.youtube.com/watch?v=sieBqVxTz2c. Does anyone know which tf frames I should connect to achieve the final result? Thanks a million!

