As the title says, I have a frame that comes by default with the Kinect, and I have calculated a 4x4 transformation matrix. How can I apply it to the existing frame?
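To make this concrete, here is roughly what I have in mind (a Python sketch only; `camera_link`, the child frame name, and the matrix values are placeholders I made up):

```python
import numpy as np
import rospy
import tf
import tf.transformations as tft

# T is the 4x4 transformation matrix I computed (placeholder values here).
T = np.array([[1.0, 0.0, 0.0, 0.1],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.3],
              [0.0, 0.0, 0.0, 1.0]])

rospy.init_node('apply_matrix_to_frame')
br = tf.TransformBroadcaster()
rate = rospy.Rate(10)
while not rospy.is_shutdown():
    # Broadcast a new frame that is the Kinect frame displaced by T.
    br.sendTransform(tft.translation_from_matrix(T),
                     tft.quaternion_from_matrix(T),
                     rospy.Time.now(),
                     'transformed_frame',   # made-up child frame name
                     'camera_link')         # assumed Kinect frame name
    rate.sleep()
```

Is broadcasting a new child frame like this the right approach, or is there a way to modify the existing frame directly?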
Thanks in advance.
↧
Apply transformation matrix to existing frame
↧
tf: Adding a frame to transforms from the past
Hi everyone,
I have a question regarding the usage of tf and hope someone can help me answer it.
The problem setup is the following:
Imagine I have a robot with a defined base_frame, and I send the robot's pose transform (in the world frame) to tf at a constant rate. The robot has a camera rigidly mounted on it, so I define a camera frame with the robot's base frame as its parent frame and also send this frame to tf. This way, I know the camera's pose in the world frame as the robot moves. Assume that at time t_0 the robot is at location s_0 and I get an image from the camera, and then I do some processing to calculate the pose of a marker in the camera frame. Assume I finish processing the image at time t_k, by which point the robot has moved to s_k. Now I define a marker frame whose parent is the camera frame. For this, I first define the marker's pose transform in the camera frame and then send the transform to tf using the API call:
    tf::TransformBroadcaster br;
    br.sendTransform(tf::StampedTransform(markerTransform, t_0, "camera_frame_tf", "marker_frame_tf"));
I stamp this marker frame transform with time t_0 instead of the current time t_k, because the image was taken at time t_0 and the calculated distance from the marker to the camera should be combined with the robot's/camera's position at t_0. So, using the above br.sendTransform call, will the marker frame be added to tf correctly at time t_0?
Also, at the current time t_k, how do I get the marker's pose w.r.t. the world frame back at time t_0?
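For concreteness, the kind of lookup I have in mind is something like this (a minimal sketch, in Python for brevity, assuming a listener that has been receiving data since before t_0):

```python
import rospy
import tf

rospy.init_node('marker_lookup')
listener = tf.TransformListener()

t_0 = rospy.Time.now()  # placeholder: in reality this is the image stamp
# ... later, at time t_k, ask for the marker pose in the world frame at t_0:
listener.waitForTransform("world", "marker_frame_tf", t_0, rospy.Duration(1.0))
(trans, rot) = listener.lookupTransform("world", "marker_frame_tf", t_0)
```

Is passing t_0 as the time argument the correct way to do this?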
Any clue will be appreciated!
Many thanks in advance.
↧
↧
turtlebot nav subscriber.... transform_tolerance does no good
I'm on Jade, running the turtlebot and amcl stacks for 2dnav. I have a custom launch file which brings together minimal.launch and amcl, and publishes depth-image data so I can use cmvision_3d (which is working in Jade).
I have problems with timeouts. When I bring up the launch file below, usually I get few or no warnings and can set goals with rviz with no problem. But when I bring up the cmvision_3d as a subscriber, I immediately get timeouts.
[ WARN] [1524237616.463431029]: Costmap2DROS transform timeout. Current time: 1524237616.4634, global_pose stamp: 1524237615.9101, tolerance: 0.5000
[ WARN] [1524237616.463480287]: Could not get robot pose, cancelling reconfiguration
[ WARN] [1524237617.470796802]: Costmap2DROS transform timeout. Current time: 1524237617.4708, global_pose stamp: 1524237615.9101, tolerance: 0.5000
which leads to
[ WARN] [1524238846.987018737]: Could not get robot pose, cancelling reconfiguration
[ERROR] [1524238847.987234328]: Extrapolation Error looking up robot pose: Lookup would require extrapolation into the past. Requested time 1524238808.323638972 but the earliest data is at time 1524238837.977431455, when looking up transform from frame [base_footprint] to frame [map]
So, after reading up on this, I decided to set the transform_tolerance to a longer time.
In turtlebot_navigation/launch/includes/amcl.launch.xml, this is set to 1.0 (there is an explanatory note about it in the file).
This setting seems to be ignored; I tried modifying it, and the costmap setting specifically, and it didn't matter. I finally edited local_costmap_params.yaml and global_costmap_params.yaml directly.
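Concretely, the lines I changed look roughly like this (a sketch of the YAML; `transform_tolerance` is the costmap_2d parameter name as I understand it):

```yaml
# global_costmap_params.yaml (and the same in local_costmap_params.yaml)
global_costmap:
  transform_tolerance: 1.0   # later tried values all the way up to 5.0
```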
Upon testing, it doesn't work. Or rather, the transform_tolerance does get set now, but the error still occurs. When I start the nav stack, all is well. When I start the cmvision_3d subscriber, I get the same errors. I've tested this all the way up to the unrealistic value of 5.0.
So... Is this a hardware problem?
Or, what else might be happening?
Launch file below.
And the cmvision_3d startup is:
roslaunch cmvision_3d color_tracker.launch
↧
turtle_tf_message_filter msgcallback function is never called
Hi, I'm a beginner learning tf, and I've run into a problem. Please help me!
I'm following this page: [tf/Tutorials/Stamped datatypes with tf::MessageFilter](http://wiki.ros.org/tf/Tutorials/Using%20Stamped%20datatypes%20with%20tf::MessageFilter), but msgCallback is never called. Here is my code:
----------
#include "ros/ros.h"
#include "tf/transform_listener.h"
#include "tf/message_filter.h"
#include "message_filters/subscriber.h"
class PoseDrawer
{
public:
PoseDrawer() : tf_() , target_frame_("turtle1")
{
ROS_INFO("PoseDrawer construct starting!");
point_sub_.subscribe(nh_, "turtle_point_stamped", 10);
tf_filter_ = new tf::MessageFilter(point_sub_, tf_, target_frame_, 10);
try
{
ROS_INFO("try starting!");
//The wrong, msgCallback is never called!!!! And others ware OK!
tf_filter_->registerCallback( boost::bind(&PoseDrawer::msgCallback, this, _1));
ROS_INFO("try ending!");
}
catch(tf::TransformException &ex)
{
ROS_ERROR("%s",ex.what());
}
ROS_INFO("PoseDrawer construct end");
};
virtual ~PoseDrawer()
{
ROS_INFO("PoseDrawer destory starting!");
};
private:
message_filters::Subscriber point_sub_;
tf::TransformListener tf_;
tf::MessageFilter * tf_filter_;
ros::NodeHandle nh_;
std::string target_frame_;
//Callback to register with tf::MessageFilter to
// be called when transforms are available
void msgCallback(const boost::shared_ptr& point_ptr)
{
ROS_INFO("msgCallback is listening!");
geometry_msgs::PointStamped point_out;
try
{
tf_.transformPoint(target_frame_, *point_ptr, point_out);
printf("point of turtle 3 in frame of turtle 1 Position(x:%f y:%f z:%f)\n",
point_out.point.x,
point_out.point.y,
point_out.point.z);
}
catch (tf::TransformException &ex)
{
printf("Failure %s\n", ex.what());
}
};
int main(int argc, char ** argv)
{
ros::init(argc, argv, "pose_drawer"); //Init ROS
PoseDrawer pd; //Construct class
ros::spin(); // Run until interupted
};
};
First, I launch the launch file I finished in [Writing a tf broadcaster (C++)](http://wiki.ros.org/tf/Tutorials/Writing%20a%20tf%20broadcaster%20%28C%2B%2B%29), [Writing a tf listener (C++)](http://wiki.ros.org/tf/Tutorials/Writing%20a%20tf%20listener%20%28C%2B%2B%29), and [tf/Tutorials/Adding a frame (C++)](http://wiki.ros.org/tf/Tutorials/Writing%20a%20tf%20broadcaster%20%28C%2B%2B%29):
$ roslaunch learning_tf start_demo.launch
Then I run the node, and it tells me:
$ rosrun learning_tf turtle_tf_message_filte
[ INFO] [1522254522.667447695]: PoseDrawer construct starting!
[ INFO] [1522254522.669954462]: try starting!
[ INFO] [1522254522.669980229]: try ending!
[ INFO] [1522254522.669992621]: PoseDrawer construct end
And I am sure the msgCallback function is never called! But I don't know how to fix it.
Please help me!
↧
Simple tf transform without broadcaster/listener
I have a vision-based control application where a drone flies around, finds a visual marker, extracts the coordinates of the visual marker in the body frame, and finally receives a position setpoint in the local NED frame.
I need to transform the coordinates of the visual marker from the body frame to the local NED frame, based on the position of the drone in local NED coordinates.
I understand that this is possible with a broadcaster + listener, but I don't want to complicate things with extra nodes; all I need is a simple mathematical transformation. Is there a simple function available to do this?
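To show what I mean, the math itself is just a rotation plus a translation, something like this numpy/tf.transformations sketch (all numbers are placeholders, and I'm assuming the drone's NED pose is available from the autopilot):

```python
import numpy as np
import tf.transformations as tft

# Placeholder pose of the drone (body frame) in the local NED frame.
p_drone_ned = np.array([10.0, -2.0, -1.5])          # drone position in NED
q_drone_ned = np.array([0.0, 0.0, 0.383, 0.924])    # body->NED orientation (x, y, z, w)

# Marker position expressed in the drone's body frame (from the vision code).
p_marker_body = np.array([1.2, 0.1, 0.4])

# Rotate the body-frame vector into NED, then add the drone's NED position.
R = tft.quaternion_matrix(q_drone_ned)[:3, :3]
p_marker_ned = R.dot(p_marker_body) + p_drone_ned
```

If there is a ready-made tf/tf2 helper that does exactly this, that would be even better.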
↧
↧
RTK GPS's for absolute position and Orientation w/ Robot Localization
Hello All,
I have asked a question before about using an RTK GPS + IMU + wheel encoder setup and received good feedback.
Now I'm asking about a system with 2 RTK GPS units + wheel encoders.
The RTK units are mounted on opposite ends of the robot frame. I am able to obtain position and orientation from this setup.
I want to clarify my settings of the frames for the robot_localization package.
I plan on publishing 2 Odometry messages from the RTK system, one for the position, the other for the orientation.
The frame_id in the Odometry message from the RTK system was originally "gps". Since the GPS is mounted on the side of the robot, I have defined a static transform from gps->base_link.
1. For the first Odometry message (in which only the linear positions and linear velocities are populated), I changed the frame_id of the message to "map" and set the child_frame_id to "gps". In this way, the RTK solution gives me a transform from map to gps, and the defined static transform gives me gps to base_link, so I obtain the map->base_link transform.
2. For the second Odometry message (in which only the yaw is populated, none of the velocities and none of the linear positions), the yaw is provided relative to true north, which is a world-fixed frame. For that I was thinking of having frame_id map and child_frame_id base_link, because the yaw described in this message is relative to true north. This would be published as a separate Odometry message.
3. For the wheel encoder node, I have it publishing an Odometry message with frame_id odom and child_frame_id base_link. All the fields are populated (pose, twist).
And now I fuse them with one instance of the EKF in the robot_localization package. From each odometry message, I fuse the following:
1. only xyz position and linear velocity xyz (map->gps + gps->base_link static transform = map->base_link)
2. yaw and only yaw (relative to earth and so map->base_link)
3. linear and angular velocity (odom->base_link)
Does this setup make sense?
Thank you,
Jad
↧
Global Costmap Updating Mysteriously
Hello everyone! I'm experiencing some weird behavior while using costmaps in `move_base` which I hope someone can shed some light on.
Thanks in advance to anyone who bothers with reading this wall of text and formulating a reply. Please let me know if I missed some important detail, and I'll be happy to add more information to this! :)
But first...
## What I'm Trying to Achieve
I want to access costmap data directly, i.e. read the values from the map while knowing how to relate them to `/map` coordinates. This is not complicated, and I believe I've achieved it with `tf` (I might have to debug it a bit, but that's beside the point). Seeing as I'm using Python, using the [Costmap2D](http://docs.ros.org/indigo/api/costmap_2d/html/classcostmap__2d_1_1Costmap2D.html) API is out of the question.
As such, I developed...
## My Approach
Given that the costmap is only published once in `/move_base/global_costmap/costmap`, and then updated via `/move_base/global_costmap/costmap_updates`, I've written a small Python node that continuously listens to these, updates the main map and then serves them on a `GetMap` service.
This would allow me to have access to the latest costmap as an `OccupancyGrid` whenever I needed it, which is fine for my purposes. Furthermore, serving the map over a service only on-demand should be substantially easier on bandwidth than constantly publishing all updates.
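For reference, the update-merging step I have in mind is roughly the following (a sketch; it assumes the patch from `map_msgs/OccupancyGridUpdate` lies fully inside the grid):

```python
from nav_msgs.msg import OccupancyGrid
from map_msgs.msg import OccupancyGridUpdate

def apply_update(grid, update):
    """Patch an OccupancyGridUpdate into an OccupancyGrid, row by row."""
    data = list(grid.data)                 # message data arrives as a tuple
    for row in range(update.height):
        start = (update.y + row) * grid.info.width + update.x
        data[start:start + update.width] = \
            update.data[row * update.width:(row + 1) * update.width]
    grid.data = data
    return grid
```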
While testing this with `rviz`, I stumbled upon...
## The Problem
**In short, it seems that the costmap that `rviz` shows is different than the one I'm getting on the `/move_base/global_costmap/costmap` topic.**
I have changed my code to act as a simple passthrough (receiving the costmap on the `/move_base/global_costmap/costmap` topic and publishing it elsewhere) and **restarting my node leads me to see, in `rviz`, an updated map without any messages being published in `/move_base/global_costmap/costmap`** (which is monitored with a `rostopic echo`).
In other words, if I move the robot's 2D pose estimate around, leading it to "pollute" the global costmap with whatever trash it's picking up on its sensors, and then restart my node, this new data will be in the costmap published by **my node**. At this point my node accesses **only** the `/move_base/global_costmap/costmap` topic, which, according to the `rostopic echo` I started at the beginning of the session, has had only a single message published since the session began (sequence number `seq` is zero).
What I'm doing is basically:
```
self._global_costmap = rospy.wait_for_message("/move_base/global_costmap/costmap", OccupancyGrid)
```
to get the first costmap, and then
```
rospy.Subscriber("/move_base/global_costmap/costmap_updates", OccupancyGridUpdate, self.costmap_cb)
```
to integrate the updates. **For testing purposes, `self.costmap_cb` is limited to:**
```
self._costmap_pub.publish(self._global_costmap)
```
i.e. it publishes the "initial" map every time an update is received, without touching it at all.
## My Theory
I'm thinking that `rviz` uses the aforementioned Costmap2D API to get updated maps. This way, when I set up a new display pointing to `/move_base/global_costmap/costmap`, it "knows" it's a costmap, and accesses the API to get the latest version instead of the barebones version published in that topic. Somehow, in ways I couldn't figure out on my own, it "knows" which costmap the topic (with a completely different name) refers to, and is somehow messing with my custom node and topic.
## My Set-up:
* Xubuntu 14.04
* ROS Indigo
* Latest versions of packages from repos
↧
Why does AMCL post-date tf (~transform_tolerance)?
A [comment in the code](https://github.com/ros-planning/navigation/blob/jade-devel/amcl/src/amcl_node.cpp#L1249-L1278) says:
> // We want to send a transform that is good up until a
> // tolerance time so that odom can be used
And the [wiki page](http://wiki.ros.org/amcl#Parameters) says
> ~transform_tolerance (double, default: 0.1 seconds)
> Time with which to post-date the transform that is published, to indicate that this transform is valid into the future.
I still don't understand why AMCL needs to pretend to be able to see the future.
↧
Visual SLAM and coordinate frames for a mobile robot
[ROS Kinetic, Ubuntu 16.04]
Hello again. I followed the tf tutorials, read REP 103 and REP 105, and I'm still having trouble setting up the coordinate systems for my mapping robot properly.
I get the `map->camera_optical` transform from ORB-SLAM2 (the version with ROS publishers from `https://gitlab.tubit.tu-berlin.de/breakdowncookie/ORB_SLAM2`). I will use that information to navigate my mobile robot with the navigation stack. The robot has a Raspberry Pi camera used for visual SLAM (visual odometry and obstacle detection are implemented in that package). The robot also has wheel encoders and uses diff_drive_controller, which publishes/could publish the odom->base_link transform.
`move_base`'s local_costmap requires some local reference frame - /odom would be perfect for that purpose. By contrast, global_costmap requires global reference - it would be good to use /map for that.
This is my tf tree, just for tests; I thought everything was OK until I tried to use ORB-SLAM.

Then I had to change the tf tree to the following form to allow the map->camera_optical transform (camera_optical must not have two parents, map and base_camera). Never mind the lack of correspondence in frame names between the two pictures; it's just to illustrate the structure.

The problem is: how can I set up the coordinate frames to make use of /map and /odom at the same time, while having the `map->camera_optical` and `odom->base_link` transforms? I have no idea how to "divide" the `map->camera_optical` transform into `map->base_link` and `base_link->camera_optical`. Is there any way to do that?
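The only way I can think of is composing the transforms as matrices myself, something like this sketch with tf.transformations (placeholder numbers; base_link->camera_optical would come from my URDF):

```python
import numpy as np
import tf.transformations as tft

def to_matrix(trans, quat):
    """Build a 4x4 homogeneous matrix from (translation, quaternion)."""
    m = tft.quaternion_matrix(quat)
    m[:3, 3] = trans
    return m

map_T_cam = to_matrix([1.0, 0.5, 0.0], [0.0, 0.0, 0.0, 1.0])    # from ORB-SLAM2
base_T_cam = to_matrix([0.1, 0.0, 0.2], [0.0, 0.0, 0.0, 1.0])   # from the URDF

# map->base_link = map->camera_optical * inverse(base_link->camera_optical)
map_T_base = np.dot(map_T_cam, tft.inverse_matrix(base_T_cam))
```

But I don't know if doing this by hand is the intended way in ROS, or whether there is a cleaner tf-based solution.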
I would appreciate any help.
↧
↧
Convention for transforming Range message?
What's the standard convention for transforming a sensor_msgs/Range measurement into the attached robot's base_link frame, given the location of the sensor on the robot? I can plop it into the Z field of a Vector3 and transform accordingly, but I was wondering if there is a more standard way.
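For reference, what I am doing right now is roughly this (a sketch; I place the reading on a single axis of the sensor frame, here +x, but I'm not sure which axis the convention actually prescribes, and the topic name is a placeholder):

```python
import rospy
import tf
from geometry_msgs.msg import PointStamped
from sensor_msgs.msg import Range

rospy.init_node('range_to_base')
listener = tf.TransformListener()

def range_cb(msg):
    # Treat the reading as a point along one axis of the sensor frame,
    # then transform it into base_link.
    pt = PointStamped()
    pt.header = msg.header
    pt.point.x = msg.range
    pt_in_base = listener.transformPoint('base_link', pt)
    rospy.loginfo('point in base_link: %s', pt_in_base.point)

rospy.Subscriber('sonar', Range, range_cb)   # placeholder topic name
rospy.spin()
```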
↧
Finding map to odom tf
Hi,
So, I know my robot's position in the map frame and I know its position in the odom frame. How can I get the tf that relates map->odom?
I know a package like amcl will give you the pose in map and also the map->odom tf. I am writing my own localization algorithm and can calculate my position in map, and I already have the odom->base_link tf, but I haven't figured out how to calculate the map->odom tf. I know I can work out the math myself, but is there any function in rospy's transformations (or even roscpp) that handles it automatically?
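Just to show the math I mean, composing the two poses as matrices would look something like this (a tf.transformations sketch with made-up numbers):

```python
import numpy as np
import tf.transformations as tft

def to_matrix(trans, quat):
    """Build a 4x4 homogeneous matrix from (translation, quaternion)."""
    m = tft.quaternion_matrix(quat)
    m[:3, 3] = trans
    return m

# Placeholders: map->base_link from my localization, odom->base_link from odometry.
map_T_base = to_matrix([2.0, 1.0, 0.0], tft.quaternion_from_euler(0, 0, 0.30))
odom_T_base = to_matrix([1.8, 0.9, 0.0], tft.quaternion_from_euler(0, 0, 0.25))

# map->odom = map->base_link * inverse(odom->base_link)
map_T_odom = np.dot(map_T_base, tft.inverse_matrix(odom_T_base))
trans = tft.translation_from_matrix(map_T_odom)
quat = tft.quaternion_from_matrix(map_T_odom)
```

If rospy or tf already provides a helper for this composition, that is exactly what I'm looking for.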
Thank you
↧
Range of acceptable positions relative to a link
Hi
I have a Gazebo simulation of a robot arm with a parallel gripper (PhantomX Pincher Arm). Because grasping objects in Gazebo is sadly not really realistic or easy, I want to check if the gripper is in the correct position, so that it *would* pick up the object if it *were* to close the fingers.
I can retrieve in my (Python) program the position & orientation of the object I want to pick up (pickMeUp) through the `/gazebo/link_state` topic. This topic also gives me one of the two finger links of the gripper. I would now like to write a condition to check whether the finger link is in an acceptable range *relative to the link of the object to pick up* (or vice versa, of course). So something like this:
    if (pose_gripper_link < pose_pickMeUp_link + distance_in_x-direction_of_pickMeUp and ... (other conditions)):
        print("grasping would most likely be successful")
Here's a sketch of the situation:
https://www.dropbox.com/s/ih20pt9h3z6lkh1/IMG_20180509_165739.jpg?dl=0
How could I achieve this?
----------
Note that I can only retrieve the Pose of pickMeUp via the `/gazebo/link_state` topic, as this link doesn't appear in my /tf-tree. I can get the links of the gripper through tf though, and get the transformation between the /world coordinate system and the gripper-link coordinate system with the following code:
    listener = tf.TransformListener()
    try:
        listener.waitForTransform("/gripper_finger_base_link", "/world", rospy.Time(), rospy.Duration(4.0))
        (trans, rot) = listener.lookupTransform("/gripper_finger_base_link", "/world", rospy.Time())
    except (tf.LookupException, tf.ConnectivityException) as e:
        print(e)
But I don't know how I can *apply* this transformation & rotation to the Pose of pickMeUp, so that its Pose would be described in the `/gripper_finger_base_link` coordinate system. I'm also not sure if this is of any use to me, as `/gazebo/link_states` already gives me (one) gripper link relative to the `/world` coordinate system. I could also construct a unit vector (described in the `/world` coordinate system) in the direction of the x-axis of the pickMeUp link, in order to have something like the `distance_in_x-direction_of_pickMeUp` from above. This shouldn't be a big issue, but I don't want to reinvent the wheel if this functionality is already in tf (or somewhere else).
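What I mean by "apply" is something like the following numpy sketch (placeholder values; I'm also not certain I have the direction of the lookupTransform result right):

```python
import numpy as np
import tf.transformations as tft

# (trans, rot) as returned by lookupTransform("/gripper_finger_base_link", "/world", ...):
# as far as I understand, this maps points expressed in /world into the gripper frame.
trans = [0.2, 0.0, 0.1]
rot = [0.0, 0.0, 0.0, 1.0]

T = tft.quaternion_matrix(rot)
T[:3, 3] = trans

# Position of pickMeUp in /world (from /gazebo/link_state), as a homogeneous point.
p_world = np.array([0.35, -0.05, 0.02, 1.0])
p_gripper = T.dot(p_world)[:3]   # pickMeUp position in the gripper frame
```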
↧
How should I think about static sensor transforms?
I have a drone where the `drone_base` is represented in NWU coordinates. Then, I have a sensor on the right side of it, pointing to the right, represented by frame `range1`. In writing my `static_transform_publisher`, in order to transform from frame `drone_base` to child `range1`, I first translated to the point location of the sensor, then performed a rotation around the Z-axis by -pi/2 so that north points to the right. The Range message in the `range1` frame, published to RViz, displays properly. Does that sound correct so far? The transform looks like this:
Here's where I get tripped up. I have four of these range measurements, two facing right and two facing left. I have a node that receives these range measurements as an array, and attempts to place the ranges in a PointCloud2 for re-publishing to RViz. I put the range into the X attribute of a PointStamped, with the same stamp and frame_id. Then I look up the transform and apply it:
auto transform = tfBuffer.lookupTransform("rangeX", "drone_base", ros::Time(0));
tf2::doTransform(inPt, outPt, transform);
(`rangeX` where `X = [0,1,2,3]`)
The points are in the incorrect positions. If I reverse the frame_ids in the lookup to:
auto transform = tfBuffer.lookupTransform("drone_base", "rangeX", ros::Time(0));
everything works fine. Is this the proper way to accomplish a child -> parent transform lookup? Or, what am I doing wrong?
↧
↧
How to connect two huskies in the same tf?
So I have two Huskies, husky1 and husky2, in a Gazebo simulation. I want to control husky1 with the keyboard, and husky2 should then follow husky1. I'm launching the move_base_mapless.launch file and using this http://wiki.ros.org/navigation/Tutorials/SendingSimpleGoals code to send husky2 to any location in the world, but it doesn't stop moving and never reaches its goal. That's probably because husky1 and husky2 are not in the same frame or something. How do I connect them in the same frame using tf, so that when I move husky2 it knows where it has to go?
here's my launch file for loading two huskies:
move_base.launch file:
↧
TF - Transform world frame update rate should be fastest
Hey there,
I am basically transforming a message from the camera_link frame to the world frame.
Here is the message before the transformation in [camera_link] frame:
    header:
      seq: 0
      stamp: 4.723000000
      frame_id: camera_link
    uid: 0
    position:
      x: 1.10738
      y: -0.16057
      z: 0.241941
    velocity:
      x: -0.00120042
      y: 0.0169336
      z: 0.240961
    diameter: 0.064
    diameter_px: 26.401
    covariance[]
      covariance[0]: 0.000693303
      covariance[1]: 0.000586575
      covariance[2]: 0.00238198
      covariance[3]: 0
      covariance[4]: 0
      covariance[5]: -0
    confidence: 1
And this is the message after it was transformed: [in world]
    header:
      seq: 0
      stamp: 4.723000000
      frame_id: world
    uid: 0
    position:
      x: 0.322941
      y: -0.21167
      z: 1.0397
    velocity:
      x: 0.321961
      y: -0.0341664
      z: 2.14828
    diameter: 0.064
    diameter_px: 26.401
    covariance[]
      covariance[0]: 0.000693303
      covariance[1]: 0.000586575
      covariance[2]: 0.00238198
      covariance[3]: -4.13646e-13
      covariance[4]: 1.5708
      covariance[5]: 0
    confidence: 1
Now, the high velocities in the world frame cannot actually be correct.
I can only guess at how these errors occur:
1) Is it because of the different update rates?
**rosrun tf view_frames**

-> Should the world frame's update rate always be the fastest?
2) Is it because I did not set a seq in the header?
3) **rosrun tf tf_monitor camera_link world**
RESULTS: for camera_link to world
Chain is: camera_link -> camera_mount_link -> camera_footprint -> world
Net delay avg = -0.0952712: max = 0
↧
tf::Transform a covariance Matrix
Hey there,
I simply would like to transform a covariance matrix from one coordinate frame to another.
What I have is the covariance matrix of a position, which simply consists of 3 diagonal values, so basically a vector with the covariances in the x, y, and z directions:
    Cov_mat = [ a  0  0
                0  b  0
                0  0  c ]
Moreover, I have the rotation matrix between the two coordinate frames I would like to transform between (call it R).
Now I would like to transform like this:
new_Cov = R*Cov_mat*R^T
**My problem: the new covariance matrix is 3x3. Can I now simply extract the diagonal values of this matrix?**
Because as a result I would like to end up with just 3 covariance values for the major axes.
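To make the question concrete, the operation in plain numpy would be something like this (placeholder values; note that after the rotation the matrix generally has non-zero off-diagonal entries, which is exactly what I'm unsure about when keeping only the diagonal):

```python
import numpy as np

C = np.diag([0.01, 0.02, 0.03])        # Cov_mat with diagonal a, b, c
R = np.array([[0.0, -1.0, 0.0],        # placeholder rotation between the frames
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])

new_cov = R.dot(C).dot(R.T)            # new_Cov = R * Cov_mat * R^T
diag_only = np.diag(new_cov)           # just the three diagonal values
```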
One idea I saw was doing this:
**new_Cov.getRPY(result_cov[0], result_cov[1], result_cov[2]);**
But I do not understand in detail what this getRPY method of a matrix does.
My full code looks like this now:
    tf::Quaternion cov_pos_quat;
    cov_pos_quat.setRPY(cov_in[0], cov_in[1], cov_in[2]);
    tf::Matrix3x3 rotation_m = tf::Matrix3x3(transform.getRotation());
    tf::Matrix3x3 cov_pos = rotation_m * tf::Matrix3x3(cov_pos_quat) * rotation_m.transpose();
    cov_pos.getRPY(cov_out[0], cov_out[1], cov_out[2]);
↧
How to remove a TF from a ROS bag?
I have a bag that contains a large TF tree where one TF (base_link->virtual_cam) was calculated based on several TFs and parameters. I would like to replay this bag without this generated TF, in order to interactively re-generate it as I play the bag.
Hence the question, how do I remove this TF from the bag? Alternatively, can I play the bag without outputting this TF?
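In case there is no ready-made tool, the kind of offline filtering I would be happy with is something like this (a sketch using the rosbag Python API; the bag names are placeholders and I'm assuming /tf messages expose a `transforms` list):

```python
import rosbag

# Copy the bag, dropping only the base_link->virtual_cam transform from /tf
# messages and keeping everything else unchanged.
with rosbag.Bag('filtered.bag', 'w') as outbag:
    with rosbag.Bag('input.bag') as inbag:
        for topic, msg, t in inbag.read_messages():
            if topic == '/tf':
                # frame names may or may not carry a leading slash
                msg.transforms = [tr for tr in msg.transforms
                                  if tr.child_frame_id.lstrip('/') != 'virtual_cam']
                if not msg.transforms:
                    continue   # nothing left in this /tf message
            outbag.write(topic, msg, t)
```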
Specs: Ubuntu 12.04, Fuerte
↧
↧
Is a Joint in URDF a frame in tf?
Hi everyone!
I know this might sound silly, but I just want to ask a simple question to reassure myself: is a joint in a URDF considered a frame in the TF library?
Thanks!
↧
base_link vs base_footprint
I have read that the navigation stack within ROS requires the use of base_footprint. What is the difference between base_link and base_footprint? How do I know which I should use?
↧
[rosrun] Couldn't find executable named view_frames below /opt/ros/indigo/share/tf
Hi all
I am unable to execute the following command in order to view my tf tree:
rosrun tf view_frames
which returns the following error:
[rosrun] Couldn't find executable named view_frames below /opt/ros/indigo/share/tf
But I don't know why. It worked some months ago.
Thank you
↧