Channel: ROS Answers: Open Source Q&A Forum - RSS feed

Why does AMCL post-date tf (~transform_tolerance)?

A [comment in the code](https://github.com/ros-planning/navigation/blob/jade-devel/amcl/src/amcl_node.cpp#L1249-L1278) says:

> // We want to send a transform that is good up until a
> // tolerance time so that odom can be used

And the [wiki page](http://wiki.ros.org/amcl#Parameters) says:

> ~transform_tolerance (double, default: 0.1 seconds) Time with which to post-date the transform that is published, to indicate that this transform is valid into the future.

I still don't understand why AMCL needs to pretend to be able to see the future.

ROS transform/synchronization issues

Hello everyone, I have a robot equipped with a stereo camera for perception and a gyroscope/odometer setup for localization. Our robot has different machines (one running the perception, the other the localization). To correctly transform the detected objects using the localization, we synchronized our machines with chrony via NTP. In our code we then call `waitForTransform()` and `lookupTransform()` to get the correct transform at the camera picture's time stamp. For some reason the objects seem to 'wander' when visualized, meaning that they get transformed using a different orientation than at the time of the image, yet `ntpdate -q machine1` returns:

```
server 10.1.1.102, stratum 9, offset -0.000060, delay 0.02617
11 Feb 17:13:38 ntpdate[22516]: adjust time server 10.1.1.102 offset -0.000060 sec
```

which should be sufficient. Unfortunately I do not really know where to look or how to start debugging tf. Any ideas would be much appreciated! Greetings, sparkas

How to get 2 separate tf trees?

Hello! I am attempting to run slam_gmapping for 2 robots. I am having a problem where the transform trees are conjoining instead of forming 2 separate trees; as a result, gmapping doesn't work. I think the problem is that map->odom->base_footprint don't have any sort of prefix to distinguish them from each other, so the 2nd robot connects to the 1st robot's map->odom->base_footprint. **I have already included namespaces and tf_prefixes in all of the launch files.** Where can I change the names of these generated tf frames? Or is there another way I can manage this? ![a closeup of the issues in view_frames](/upfiles/15331442472609916.jpg)

TF Error: Lookup would require extrapolation into the future

I'm running rtabmap on a Jackal robot with a Bumblebee2 stereo camera on ROS Kinetic. I get the following error when trying to view a PointCloud2 topic in rviz: [link text](https://imgur.com/a/rOdPHR9). When I view the tf tree [link text](https://imgur.com/a/7e3s1p2) I see that the transform from the map frame broadcast by rtabmap is -0.086 sec old.

RVIZ shows odom frame fixed to map frame when running rtabmap

I've set the map frame as the fixed frame in RVIZ, and this is what it looks like when running rtabmap https://imgur.com/a/U5fJLLP . This is what my partial tf-tree looks like https://imgur.com/a/1s2PnV8. This is my launch file for rtabmap:

Could not get transform from odom to base_link

I'm running rtabmap with an RGB-D sensor and get the following error:

```
We received odometry message, but we cannot get the corresponding TF odom->base_link at data stamp 1533265358.770186s (odom msg stamp is 1533265358.757750s). Make sure TF of odometry is also published to get more accurate pose estimation. This warning is only printed once.
```

This is my launch file. This is how my tf-tree is being published for some reason: https://imgur.com/a/Wp4ZodK, and this is how I'm connecting my Kinect with base_link:

```
rosrun tf static_transform_publisher 0 0 0.1 0 0 0 1 base_link camera_rgb_optical_frame 100
```

Lastly, I'm calling

```
roslaunch openni_launch openni.launch depth_registration:=true
```

to launch my RGB-D node.

tf_prefix and "timeout expired" error

We are attempting to run multiple bots from one rosmaster, and as a result we are trying to use namespaces along with tf_prefixes. We recently updated our OpenCR firmware to 1.2.2 to accommodate this. However, when we follow the [Robotis turtlebot3 e-manual on loading multiple turtlebots](http://emanual.robotis.com/docs/en/platform/turtlebot3/applications/#load-multiple-turtlebot3s), we get the error `Failed to get param: timeout expired` about our tf_prefix when we run the robot bringup launch file. The result is that none of our transforms get prefixes. Also, strangely enough, it *sometimes* works. Any ideas how to fix this so that it works every time? Thanks!

odom not aligned with base_link?

When viewing my robot in rviz, the odom link seems not to be aligned with the rest of the robot: https://imgur.com/a/4exMRIa. What could be causing this?

How to access old transforms in tf

Let's say I have an algorithm that takes timestamped data and processes it, with the processing taking > 10 seconds. Not exactly blazing fast, I know, but for argument's sake let's say the code is well written and optimized :) If the tf buffer is only set to 10 seconds, then publishing or listening to tf with the original timestamp will fail. Is there a way to control the size of the tf buffer, or is tf not the tool for applying transforms in this case? Has anybody else run into this issue? I would not be surprised if this came up for the UCB clothes-folding app.
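For what it's worth, the buffer length is a constructor argument in tf2 (if I remember the signatures right, `tf2_ros::Buffer(ros::Duration(60.0))` in C++ and `tf2_ros.Buffer(cache_time=rospy.Duration(60))` in Python), so the 10 s default is not a hard limit. A toy sketch (plain Python, not the real tf API) of why a lookup older than the cache window fails:

```python
import bisect

class TinyTfBuffer:
    """Toy model of tf's fixed-length history buffer (illustration only).

    Like tf2_ros.Buffer with its cache_time argument: transforms older than
    the newest stamp minus the window are evicted, so a lookup at an old
    stamp fails even though the data was once there.
    """
    def __init__(self, cache_time=10.0):
        self.cache_time = cache_time
        self.stamps = []      # sorted timestamps
        self.transforms = {}  # stamp -> transform

    def set_transform(self, stamp, transform):
        bisect.insort(self.stamps, stamp)
        self.transforms[stamp] = transform
        cutoff = self.stamps[-1] - self.cache_time
        while self.stamps and self.stamps[0] < cutoff:
            del self.transforms[self.stamps.pop(0)]

    def can_transform(self, stamp):
        return bool(self.stamps) and self.stamps[0] <= stamp <= self.stamps[-1]

buf = TinyTfBuffer(cache_time=10.0)
for t in range(0, 31):          # 30 s of data at 1 Hz
    buf.set_transform(float(t), "T%d" % t)

print(buf.can_transform(25.0))  # True: within the last 10 s
print(buf.can_transform(5.0))   # False: evicted, lookup fails
```

With a 60 s cache a stamp that is 10-60 s old would still resolve, at the cost of more memory in the listener.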

tf::TransformListener::transformPose [exception] target_frame does not exist

Hi, I am trying to transform a `geometry_msgs::PoseStamped` from one frame to another using `tf::TransformListener::transformPose`. Here is my code snippet:

```cpp
tf::TransformListener listener;
geometry_msgs::PoseStamped pose_map;
try {
  listener.transformPose("map", pose_world, pose_map);  // pose_world is in the world frame
}
catch (tf::TransformException ex) {
  ROS_ERROR("transfrom exception : %s", ex.what());
}
```

I am getting the exception below:

```
[ERROR] [1410527449.677789054, 1410185755.142016150]: transfrom exception : "map" passed to lookupTransform argument target_frame does not exist.
```

But I can see the tf between /world and /map from `rosrun tf view_frames` and also from `rosrun tf tf_echo world map`.

**Edit:** After adding waitForTransform I am getting the exception below, even though /map to /world is published at 25 Hz from the launch file. I am using a bag file for the tf data with --clock and use_sim_time true.

```
[ERROR] [1410539110.856303453, 1410185780.612246601]: transfrompose exception : Lookup would require extrapolation into the past. Requested time 1410185779.600078575 but the earliest data is at time 1410185779.862480216, when looking up transform from frame [world] to frame [map]
```

Thank you.

Interpolation between two poses

Hey all, I am looking for a function which interpolates between two PoseStamped poses for a given time. I know there has to be something within the tf/tf2 API but I could not find it. Something along the lines of `poseStamped = poseInterpolation(poseStamp_t1, poseStamp_t2, t)` where t_1 < t < t_2. Would be glad for some pointers. Cheers
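I am not aware of a single ready-made stamped-pose interpolation call in the public tf2 API either (C++ does expose `tf2::Quaternion`'s slerp for the rotation part), so here is a hand-rolled sketch: linear interpolation for translation and slerp for rotation. All function and variable names are made up for illustration.

```python
import math

def slerp(q0, q1, u):
    """Spherical linear interpolation between unit quaternions (x, y, z, w)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:               # take the short way around
        q1 = [-c for c in q1]
        dot = -dot
    dot = min(dot, 1.0)
    theta = math.acos(dot)
    if theta < 1e-9:            # nearly identical: fall back to linear
        return [a + u * (b - a) for a, b in zip(q0, q1)]
    s0 = math.sin((1.0 - u) * theta) / math.sin(theta)
    s1 = math.sin(u * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(q0, q1)]

def interpolate_pose(p0, t0, p1, t1, t):
    """p = ((x, y, z), (qx, qy, qz, qw)); returns the pose at time t0 <= t <= t1."""
    u = (t - t0) / (t1 - t0)
    pos = tuple(a + u * (b - a) for a, b in zip(p0[0], p1[0]))
    return pos, slerp(p0[1], p1[1], u)

# Halfway between identity and a 90-degree yaw, 2 m apart -> 45-degree yaw at 1 m
p0 = ((0.0, 0.0, 0.0), (0.0, 0.0, 0.0, 1.0))
p1 = ((2.0, 0.0, 0.0), (0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4)))
pos, q = interpolate_pose(p0, 0.0, p1, 1.0, 0.5)
print(pos)  # (1.0, 0.0, 0.0)
```

The same structure ports directly to C++ with `tf2::Quaternion::slerp` and `tf2::Vector3::lerp`.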

mapviz - transforming local xy coordinate frame to tile map lat/lon coordinate frame

I am running a tile map plugin for mapviz. As I move my mouse on the map I can see the GPS coordinates changing in the UI. I also used swri_transform_util to give my robot a lat/lon origin for its local xy frame. Now I am trying to use the point-click publisher to get lat/lon coordinates when a place on the map is clicked, but I am not sure what frame the tile map uses. I can get my robot's local xy position when I click on the map, but I am not sure how to transform it to lat/lon coordinates.
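In case it helps: a local xy frame anchored at a lat/lon origin is typically a flat-earth (equirectangular) approximation around that origin, so the inverse mapping back to lat/lon is only a couple of lines. A sketch under that assumption (plain Python; `local_xy_to_wgs84` is a made-up name, not the swri_transform_util API):

```python
import math

EARTH_RADIUS = 6378137.0  # WGS84 equatorial radius, metres

def local_xy_to_wgs84(x, y, origin_lat, origin_lon):
    """Approximate inverse of an ENU local x/y (metres) anchored at a
    lat/lon origin. Equirectangular approximation: fine for the few-km
    extent of a tile map, increasingly wrong far from the origin."""
    lat = origin_lat + math.degrees(y / EARTH_RADIUS)
    lon = origin_lon + math.degrees(x / (EARTH_RADIUS * math.cos(math.radians(origin_lat))))
    return lat, lon

# ~111.32 km north of the origin is roughly one degree of latitude
lat, lon = local_xy_to_wgs84(0.0, 111319.49, 45.0, 7.0)
print(lat, lon)  # ~46.0, 7.0
```

swri_transform_util itself ships an origin-anchored converter for exactly this (its LocalXy/WGS84 utility), which is the safer choice if the node already depends on that package.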

Rviz end effector position commands not reflecting the right coordinates

Hello, I am working with ROS Kinetic on Ubuntu 16.04, and I am trying to send end-effector poses to my Staubli TX90 using move group commands. I send an [x, y, z, w] coordinate through the setPositionTarget function, and the movement in Rviz does not reflect the desired movement. In my program I try to create circular motion by changing x and y while keeping z constant. I print the values I send through the function to my console, and they trace a circular path, but the visualization in Rviz shows different movement, including movement along the z axis. When I click the dropdown for the end-effector link I am controlling, it shows different coordinates than what I output to the console. I believe there may be a problem with the transforms or the kinematic solver, but I am not sure. Anything commented out was an attempt at Cartesian path planning, which also does not work due to the discrepancy in the coordinate system.

Code:

```cpp
#include <ros/ros.h>
#include <iostream>
#include <moveit/move_group_interface/move_group_interface.h>
#include <moveit/planning_scene_interface/planning_scene_interface.h>
#include <moveit_visual_tools/moveit_visual_tools.h>

using std::cout;

int main(int argc, char** argv)
{
  ros::init(argc, argv, "mover_node");
  ros::NodeHandle node_handle;
  ros::AsyncSpinner spinner(1);
  spinner.start();
  bool success;
  //std::vector<geometry_msgs::Pose> waypoints;

  // Setup
  static const std::string PLANNING_GROUP = "arm";
  moveit::planning_interface::MoveGroupInterface move_group(PLANNING_GROUP);
  moveit::planning_interface::PlanningSceneInterface planning_scene_interface;
  const robot_state::JointModelGroup* joint_model_group =
      move_group.getCurrentState()->getJointModelGroup(PLANNING_GROUP);

  // Visualization
  namespace rvt = rviz_visual_tools;
  moveit_visual_tools::MoveItVisualTools visual_tools("staubli_tx90_robot_link_6");
  visual_tools.deleteAllMarkers();
  visual_tools.loadRemoteControl();
  Eigen::Affine3d text_pose = Eigen::Affine3d::Identity();
  text_pose.translation().z() = 1.75;
  visual_tools.publishText(text_pose, "MoveGroupTX90 Demo", rvt::WHITE, rvt::XLARGE);
  visual_tools.trigger();

  // Getting information
  ROS_INFO_NAMED("tutorial", "Reference frame: %s", move_group.getPlanningFrame().c_str());
  ROS_INFO_NAMED("tutorial", "End effector link: %s", move_group.getEndEffectorLink().c_str());

  // Start Demo
  visual_tools.prompt("Press 'next' in the RvizVisualToolsGui window to start the demo");
  move_group.setNamedTarget("candle");

  // Ask Rviz to visualize and not move
  moveit::planning_interface::MoveGroupInterface::Plan my_plan;
  success = (move_group.plan(my_plan) == moveit::planning_interface::MoveItErrorCode::SUCCESS);
  move_group.execute(my_plan);

  double rad = 0.0;
  double x = 0.0;
  double y = 0.0;
  double z = 0.0;
  std::vector<double> rpy;
  geometry_msgs::PoseStamped current_poseS;
  geometry_msgs::Pose current_pose;
  geometry_msgs::Pose target_pose;
  current_poseS = move_group.getCurrentPose(move_group.getEndEffectorLink().c_str());
  rpy = move_group.getCurrentRPY(move_group.getEndEffectorLink().c_str());
  cout << "XYZ position: " << "[" << current_poseS.pose.position.x << ", "
       << current_poseS.pose.position.y << ", " << current_poseS.pose.position.z << ", "
       << current_poseS.pose.orientation.w << "]" << "\n";

  //current_pose.position.x = 0.0469089;
  //current_pose.position.y = 0.050004;
  //current_pose.position.z = 1.42799;
  //current_pose.orientation.w = 0.999996;
  //waypoints.push_back(current_pose);
  //target_pose.position.x = current_pose.position.x;
  //target_pose.position.y = current_pose.position.y;
  //target_pose.position.z = current_pose.position.z;
  /*for(int i = 0; i < 10; i++)
  {
    target_pose.position.x = target_pose.position.x + .05;
    target_pose.position.y = target_pose.position.y;
    target_pose.position.z = target_pose.position.z;
    waypoints.push_back(target_pose);
    cout << "XYZ position: " << "[" << target_pose.position.x << ", " << target_pose.position.y
         << ", " << target_pose.position.z << ", " << target_pose.orientation.w << "]" << "\n";
  }*/

  for(double rad = 0.0; rad < 3; rad += 0.01)
  {
    x = current_poseS.pose.position.x * cos(rad);
    y = current_poseS.pose.position.y * sin(rad);
    move_group.setPositionTarget(x, y, z, move_group.getEndEffectorLink().c_str());
    // first we want to deal with xyz. consider RPY constant.
    move_group.setRPYTarget(rpy[0], rpy[1], rpy[2], move_group.getEndEffectorLink().c_str());
    cout << "XYZ position: " << "[" << x << ", " << y << ", " << z << "]" << "\n";
    success = (move_group.plan(my_plan) == moveit::planning_interface::MoveItErrorCode::SUCCESS);
    move_group.execute(my_plan);
  }

  //moveit_msgs::RobotTrajectory trajectory;
  //const double jump_threshold = 0.0;
  //const double eef_step = 0.01;
  //double fraction = move_group.computeCartesianPath(waypoints, eef_step, jump_threshold, trajectory);
  //ROS_INFO_NAMED("tutorial", "Visualizing plan 4 (Cartesian path) (%.2f%% acheived)", fraction * 100.0);
  //visual_tools.deleteAllMarkers();
  //visual_tools.publishText(text_pose, "Joint Space Goal", rvt::WHITE, rvt::XLARGE);
  //visual_tools.publishPath(waypoints, rvt::LIME_GREEN, rvt::SMALL);
  //for (std::size_t i = 0; i < waypoints.size(); ++i)
  //  visual_tools.publishAxisLabeled(waypoints[i], "pt" + std::to_string(i), rvt::SMALL);
  visual_tools.trigger();
  visual_tools.prompt("Press 'next' in the RvizVisualToolsGui window to continue the demo");
  //success = (move_group.plan(my_plan) == moveit::planning_interface::MoveItErrorCode::SUCCESS);
  //move_group.execute(my_plan);
  ros::shutdown();
  return 0;
}
```

tf::Transform getOrigin returning different value from rviz

Hi there! I'm trying to see the pose and rotation of a UR5 using tf and comparing with rviz and the robot GUI itself. In rviz I get the same result as the GUI, but when I listen to the transform and print the values, they are different from expected.

```cpp
#include <ros/ros.h>
#include <iostream>
// tf
#include <tf/transform_listener.h>
#include <tf/transform_datatypes.h>

int main(int argc, char** argcv)
{
  ros::init(argc, argcv, "world_to_robot_pose");
  tf::StampedTransform transform;
  static tf::TransformListener listener;
  while (ros::ok()) {
    try {
      listener.lookupTransform("/tool0_controller", "/world", ros::Time(0), transform);
      ROS_INFO_STREAM(" Transform: " << transform.getOrigin().x() << ", "
                                     << transform.getOrigin().y() << ", "
                                     << transform.getOrigin().z());
    }
    catch (tf::TransformException& ex) {
      ROS_ERROR("%s", ex.what());
    }
  }
  return 0;
}
```
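One thing worth checking with questions like this: `lookupTransform(target, source, ...)` returns the source frame expressed in the target frame, so looking up ("/tool0_controller", "/world") gives the inverse of the tool-in-world pose that a teach pendant or rviz displays. A small sketch of what inverting a rigid transform does to the numbers (plain Python, planar case for brevity; not the tf API):

```python
import math

def invert_2d(theta, tx, ty):
    """Invert a planar rigid transform p' = R(theta) p + t.

    The inverse has rotation -theta and translation -R(-theta) t, so the
    translation of the inverse is generally NOT just the negated original:
    the numbers printed by the listener can look entirely different from
    the GUI while describing the same relative pose.
    """
    c, s = math.cos(theta), math.sin(theta)
    ix = -( c * tx + s * ty)   # -R(-theta) t, x component
    iy = -(-s * tx + c * ty)   # -R(-theta) t, y component
    return -theta, ix, iy

# Tool at (1, 0) rotated 90 deg in world -> world seen from the tool:
theta, x, y = invert_2d(math.pi / 2, 1.0, 0.0)
print(theta, x, y)  # ~ (-pi/2, 0.0, 1.0)
```

Swapping the two frame arguments in `lookupTransform` (or calling `transform.inverse()` in C++) should make the printed origin match the GUI if this is the cause.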

How to generate a new map from gmapping?

Hi all, I want to create a new map from the data in the map generated by gmapping, but according to the robot's position and orientation and other specifications. I'm going to explain exactly what I want. So here's the initial situation: ![image description](/upfiles/15344273522800344.jpg)

This new map must respect these rules:

- It has to encompass the gmapping map.
- It has much coarser cells (let's say 3 meters), and the current position of the robot must be the coordinate of one of the new cells (meaning the robot has to be aligned on the bottom right of a cell, because the new map is generated according to its position). I drew a little example where the grid you see represents all the cells of the new map: ![image description](/upfiles/153442737294644.jpg)
- I set the values in `NewOccupancyGrid.data` according to the ones from the gmapping map: with a `for` loop I check all the cell values (from the gmapping map) and increment different counters (one counter for each corresponding cell of my new map) whenever a cell equals -1 (UNKNOWN) or 100 (OBSTACLE), and if a counter exceeds a threshold I consider that my new cell is occupied.
Here is the code for this part, it might help you see what I'm doing:

```cpp
std::vector<int> unknown_counter(tile_map_size);
std::vector<int> obstacle_counter(tile_map_size);
std::vector<int> free_counter(tile_map_size);
std::vector<int> number_cells(tile_map_size);
float x_from_map_origin = 0.0;
float y_from_map_origin = 0.0;
int tile_index = 0;

for (int map_row = 0; map_row < map_metadata.width; map_row++)
{
  for (int map_column = 0; map_column < map_metadata.height; map_column++)
  {
    x_from_map_origin = map_metadata.origin.position.x + map_row * map_metadata.resolution;
    y_from_map_origin = map_metadata.origin.position.y + map_column * map_metadata.resolution;
    tile_index = floor((x_from_map_origin - tile_map_metadata.origin.position.x) / tile_length_)
               + floor((y_from_map_origin - tile_map_metadata.origin.position.y) / tile_length_)
                 * tile_map_metadata.width;
    number_cells[tile_index] += 1;  // count the cell in the tile it belongs to
    if (occupancy_grid_temp.data[map_row + map_column * map_metadata.width] == OCCUPIED_CELL)
    {
      obstacle_counter[tile_index] += 1;
    }
    else if (occupancy_grid_temp.data[map_row + map_column * map_metadata.width] == UNKNOWN_CELL)
    {
      unknown_counter[tile_index] += 1;
    }
    else
    {
      free_counter[tile_index] += 1;
    }
  }
}

std::vector<int8_t> vect(tile_map_size);
tile_map_occupancy_grid.data = vect;
tile_map_occupancy_grid.info = tile_map_metadata;
for (int tile_map_index = 0; tile_map_index < tile_map_size; tile_map_index++)
{
  // If there is more than 80% obstacles
  if (100 * obstacle_counter[tile_map_index] / number_cells[tile_map_index] > 80)
  {
    tile_map_occupancy_grid.data[tile_map_index] = 100;
  }
  // If there is at least 20% free space
  else if (100 * free_counter[tile_map_index] / number_cells[tile_map_index] > 20)
  {
    tile_map_occupancy_grid.data[tile_map_index] = 0;
  }
  else
  {
    tile_map_occupancy_grid.data[tile_map_index] = 50;
  }
}
```

I'm currently able to get to this result because I'm launching my node before the robot moves, so the odom and map frames are aligned.
My problem is now to get to this result: ![image description](/upfiles/15344274185928607.jpg)

If I move the robot before creating the new map (and that will happen), the orientation has changed, which makes the whole situation a lot more complex. So I have these questions:

- Is it possible, when launching my node, to change the gmapping orientation (I don't care about the position, it's only an offset) so that the occupancy grid values are aligned with my new occupancy grid and I can use the same algorithm?
- If not, is it possible to reset gmapping (or launch a new one) when launching my node, to be back in the initial situation?
- Or should I try to get the transform matrix and apply it to each cell of the gmapping map to get the values aligned with my new map?

I hope you understand my problem and I'm open to suggestions if you have a hint on how to solve it. Thank you in advance
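The per-tile counting described in the question is independent of the frame-alignment issue, and can be sketched in a few lines (plain Python; the 80%/20% thresholds match the post, everything else is made up for illustration):

```python
def downsample(grid, width, height, factor, occ_thresh=0.8, free_thresh=0.2):
    """Collapse factor x factor blocks of a row-major occupancy grid
    (-1 unknown / 0 free / 100 occupied) into one coarse cell each."""
    out_w, out_h = width // factor, height // factor
    out = []
    for ty in range(out_h):
        for tx in range(out_w):
            occ = free = total = 0
            for dy in range(factor):
                for dx in range(factor):
                    v = grid[(ty * factor + dy) * width + tx * factor + dx]
                    total += 1
                    if v == 100:
                        occ += 1
                    elif v == 0:
                        free += 1
            if occ / total > occ_thresh:        # mostly obstacles
                out.append(100)
            elif free / total > free_thresh:    # enough free space
                out.append(0)
            else:                               # mostly unknown
                out.append(50)
    return out, out_w, out_h

# 4x2 fine grid -> 2x1 coarse grid: left block all occupied, right block all free
grid = [100, 100, 0, 0,
        100, 100, 0, 0]
coarse, w, h = downsample(grid, 4, 2, 2)
print(coarse)  # [100, 0]
```

For the rotation question itself, resampling each coarse cell through the map->odom transform (the third option in the list) avoids touching gmapping at all, at the cost of iterating the fine grid once per rebuild.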

Denavit-Hartenberg (DH) to URDF/SDF or /tf /tf2?

Hi! Can anybody advise: is there any package to build a manipulator using Denavit-Hartenberg parameters, then convert to URDF/SDF, or just use the outputs as /tf or /tf2?

How to use tf for custom transformation

Hi! I'm searching for a way to do the following:

1. create a temporary frame (with a known parent, let's just call it the parent frame),
2. generate some points,
3. get the transformation from the temporary frame to the parent frame,
4. apply the transformation to all generated points.

To create the temporary frame I have yaw, pitch and roll. The offset is zero. I could not find documentation on how to generate the frame on the fly. I know about the StaticTransformBroadcaster, but I will generate hundreds of frames. They will never be used outside of the method where they are generated. This method is in the critical path of the code, so broadcasting the frames seems heavy overkill. So far I've only been able to use tf2 methods for matrix multiplication after I generate the transform myself. However, there are cases where I do need to switch coordinate frames to calculate properly. Is there a way to generate the temporary frame locally?
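A purely local version of those four steps, with no broadcasting at all, is just a rotation matrix built from yaw/pitch/roll. A sketch in plain Python (the Z-Y-X convention used here is an assumption; check it against your frames before relying on it):

```python
import math

def rotation_from_ypr(yaw, pitch, roll):
    """Z-Y-X (yaw-pitch-roll) rotation matrix: temporary frame -> parent frame.
    Offset is zero, as in the question, so rotation alone defines the frame."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def to_parent(R, p):
    """Express a point given in the temporary frame in the parent frame."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) for i in range(3))

# Temporary frame yawed 90 deg: its x axis points along the parent's y axis
R = rotation_from_ypr(math.pi / 2, 0.0, 0.0)
p = to_parent(R, (1.0, 0.0, 0.0))
print(p)  # ~ (0.0, 1.0, 0.0)
```

Inside ROS the same matrix can come from `tf.transformations.euler_matrix` (Python) or `tf2::Matrix3x3::setRPY` (C++) without ever involving a broadcaster or listener, which keeps the hundreds of throwaway frames out of the tf tree entirely.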

How to stop gazebo publishing tf

Hi, all! I'm testing the node /robot_pose_ekf with the Gazebo simulator, and I have already let /gazebo publish IMU and odometry data to /robot_pose_ekf. Also, /robot_pose_ekf is connected to tf. However, when I checked the tf tree I found that the transform from odom -> base_footprint was provided by /gazebo, but I want it to come from /robot_pose_ekf. I guess it is because both /gazebo and /robot_pose_ekf are publishing /tf. Hence, I think stopping /gazebo from publishing /tf could be a solution. FYI, I'm using GMapping, which subscribes to /tf to provide SLAM functions. Can anyone tell me how to stop gazebo publishing tf? Or any other solutions to my problem? Thank you in advance!

Introduction to tf tutorial: Second turtle does not follow the first one. (No errors)

I've been following http://wiki.ros.org/tf/Tutorials/Introduction%20to%20tf - I started with installing everything with the following command:

```
$ sudo apt-get install ros-kinetic-ros-tutorials ros-kinetic-geometry-tutorials ros-kinetic-rviz ros-kinetic-rosbash ros-kinetic-rqt-tf-tree
```

Luckily everything was already installed and I got this output:

```
Reading package lists... Done
Building dependency tree
Reading state information... Done
ros-kinetic-geometry-tutorials is already the newest version (0.2.2-0xenial-20180809-184418-0800).
ros-kinetic-ros-tutorials is already the newest version (0.7.1-0xenial-20180809-153903-0800).
ros-kinetic-rosbash is already the newest version (1.14.3-0xenial-20180809-134025-0800).
ros-kinetic-rqt-tf-tree is already the newest version (0.5.8-0xenial-20180809-162041-0800).
ros-kinetic-rviz is already the newest version (1.12.16-0xenial-20180809-192423-0800).
The following package was automatically installed and is no longer required:
  python-libxml2
Use 'sudo apt autoremove' to remove it.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
```

Now, when I launch `roslaunch turtle_tf turtle_tf_demo.launch` - I get no errors or warnings:

```
... logging to /home/snowden/.ros/log/3f67ef12-a810-11e8-bcd4-8ca982221578/roslaunch-snowden-Inspiron-N5010-17760.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://snowden-Inspiron-N5010:41377/

SUMMARY
========

PARAMETERS
 * /rosdistro: kinetic
 * /rosversion: 1.12.13
 * /scale_angular: 2.0
 * /scale_linear: 2.0
 * /turtle1_tf_broadcaster/turtle: turtle1
 * /turtle2_tf_broadcaster/turtle: turtle2

NODES
  /
    sim (turtlesim/turtlesim_node)
    teleop (turtlesim/turtle_teleop_key)
    turtle1_tf_broadcaster (turtle_tf/turtle_tf_broadcaster.py)
    turtle2_tf_broadcaster (turtle_tf/turtle_tf_broadcaster.py)
    turtle_pointer (turtle_tf/turtle_tf_listener.py)

ROS_MASTER_URI=http://localhost:11311

process[sim-1]: started with pid [17777]
process[teleop-2]: started with pid [17778]
process[turtle1_tf_broadcaster-3]: started with pid [17779]
process[turtle2_tf_broadcaster-4]: started with pid [17786]
process[turtle_pointer-5]: started with pid [17795]

Reading from keyboard
---------------------------
Use arrow keys to move the turtle.
```

After a brief moment, the second turtle revolves a little bit: ![C:\fakepath\turtle.gif](/upfiles/15351936242633419.gif)

According to the tutorial, if I start moving the first turtle, the second turtle should follow it, but as visible in the GIF, nothing happens; it just stays as is. FWIW, my `turtle_tf_demo.launch` file looks like this:

I'm running ROS Kinetic on Xubuntu 16.04 LTS (xenial). Here is the output of `env | grep 'ROS'`:

```
ROS_DISTRO=kinetic
ROS_ETC_DIR=/opt/ros/kinetic/etc/ros
ROS_PACKAGE_PATH=/home/snowden/catkin_ws/src/beginner_tutorials:/home/snowden/catkin_ws/src/husky_highlevel_controller:/home/snowden/catkin_ws/src/ros_best_practices/ros_package_template:/home/snowden/catkin_ws/src/teleop_twist_keyboard:/opt/ros/kinetic/share
ROS_VERSION=1
ROS_ROOT=/opt/ros/kinetic/share/ros
ROS_MASTER_URI=http://localhost:11311
ROSLISP_PACKAGE_DIRECTORIES=/home/snowden/catkin_ws/devel/share/common-lisp
```

Also, I'd be around if you need any more info. Is there something wrong on my side? Thanks in advance! :-)

Best/simplest way to implement TF for localization application.

Hello everyone! I'm still a beginner ROS user, so I apologize in advance if this is too simple a question. I'm working with a simple mobile robot powered by ROS for a localization application. Basically, I have two sensors: one that gives the 2D position **L'** of a known landmark in the **robot's reference frame**, and another that gives the **robot's heading in the world frame**. The goal is to calculate the robot's real position **X** in the **world frame** using: **L**, the known landmark's position in the world frame; **L'**, the landmark's position in the robot's frame as given by the first sensor; and **phi**, the robot's heading given by the second sensor. I already do this using rotation matrices, but since this code might be used by other students after me, I want to know if there's a simple way to implement it with TF, as I have never used TF before. Thanks a lot in advance! P.S.: I'm using ROS Melodic Morenia and programming in Python.
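If the geometry is the one described, the rotation-matrix solution is one line of algebra, X = L - R(phi) L', and having it written out may help decide whether tf buys much here. A sketch in plain Python (2D, all names made up for illustration):

```python
import math

def robot_position(L, L_prime, phi):
    """World position of the robot from one landmark sighting.

    L       : landmark position in the world frame
    L_prime : landmark position measured in the robot frame
    phi     : robot heading in the world frame (radians)

    The sighting satisfies L = X + R(phi) L', hence X = L - R(phi) L'.
    """
    c, s = math.cos(phi), math.sin(phi)
    rx = c * L_prime[0] - s * L_prime[1]   # R(phi) L', x component
    ry = s * L_prime[0] + c * L_prime[1]   # R(phi) L', y component
    return (L[0] - rx, L[1] - ry)

# Robot at (1, 1) facing +y (phi = 90 deg); the landmark at world (1, 3)
# appears 2 m straight ahead, i.e. at (2, 0) in the robot frame.
X = robot_position((1.0, 3.0), (2.0, 0.0), math.pi / 2)
print(X)  # ~ (1.0, 1.0)
```

Doing the same through tf would mean publishing a world->robot transform built from phi and transforming L' with it; the frame bookkeeping tf adds pays off mostly when more frames or consumers get involved.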

