Hi,
I'm working with [robot_localization](http://docs.ros.org/kinetic/api/robot_localization/html/index.html). Fusing my IMU and odometry is working great, but when I plug the output (`/odometry/filtered`) into the `navsat_transform` node, the pose is never updated: x, y and z never change and always stay at 0.0.
As far as I understand, I am starting the node correctly and supplying the correct topics; see the rqt_graph and launch file (link below).
One thing to note is that the `child_frame_id` in `/odometry/gps/` is empty. I changed the driver and hardcoded it to base_link, but that had no effect.
Bag file, launch file and rqt_graph are [here](https://drive.google.com/drive/folders/1S92IvmbTxnTAd8B-XZ2D7VEc09eYxQJa?usp=sharing) (the robot is idle for about 30 seconds).
I appreciate any help, thanks!
↧
no pose update in navsat_transform_node topic
↧
What is odom_frame ever used for?
I understand that the odom frame is only locally accurate, and that the map -> odom -> base_link chain is what provides accurate pose information when it is needed. However, map -> odom -> base_link is subject to discrete jumps, so odom -> base_link is what you use when you need smooth data. This makes sense, but why would you ever need smooth yet completely incorrect pose information? One possibility I have thought of is that you never use odom -> base_link on its own: when you need a smooth pose, you take a single map -> odom -> base_link transform once, and from then on you compose the changing odom -> base_link transform with that single static snapshot. That essentially "resets" the odom estimate from being completely inaccurate to once again being on the mark, after which it slowly diverges again. Is this the norm?
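To make the distinction concrete, here is a minimal sketch of the two lookups I am comparing (frame names assumed to be the usual map/odom/base_link; error handling omitted):

    #include <ros/ros.h>
    #include <tf2_ros/transform_listener.h>
    #include <geometry_msgs/TransformStamped.h>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "frame_compare");
      tf2_ros::Buffer buffer;
      tf2_ros::TransformListener listener(buffer);
      ros::Duration(1.0).sleep();  // give the buffer a moment to fill

      // Globally accurate, but may jump when the localizer corrects itself:
      geometry_msgs::TransformStamped map_base =
          buffer.lookupTransform("map", "base_link", ros::Time(0));

      // Locally smooth, but drifts over time:
      geometry_msgs::TransformStamped odom_base =
          buffer.lookupTransform("odom", "base_link", ros::Time(0));

      ROS_INFO("map->base_link x: %f, odom->base_link x: %f",
               map_base.transform.translation.x,
               odom_base.transform.translation.x);
      return 0;
    }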
↧
tf rviz camera
Hi, I need to see the position of a ball in RViz using my camera and tf. The thing is that I have the OpenCV code, but I have no idea how to write the tf code that subscribes to my image (from OpenCV) and shows the ball in RViz. I have already looked at many tf publisher and listener examples, and I still have no idea how to use them.
Please can somebody help me?
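For reference, this is the kind of minimal broadcaster I imagine writing. It assumes the OpenCV code already gives the ball's 3D position in the camera frame; "camera_link" and the hard-coded position are only placeholders:

    #include <ros/ros.h>
    #include <tf/transform_broadcaster.h>

    // Publish the ball as a tf child of the camera frame so RViz can show it
    // (add a TF display and set the fixed frame to camera_link or a parent).
    void publishBall(tf::TransformBroadcaster& br, double x, double y, double z)
    {
      tf::Transform t;
      t.setOrigin(tf::Vector3(x, y, z));          // ball position in metres
      t.setRotation(tf::Quaternion(0, 0, 0, 1));  // orientation does not matter for a ball
      br.sendTransform(tf::StampedTransform(t, ros::Time::now(), "camera_link", "ball"));
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "ball_tf_broadcaster");
      ros::NodeHandle nh;
      tf::TransformBroadcaster br;
      ros::Rate rate(30);
      while (ros::ok())
      {
        publishBall(br, 0.0, 0.0, 1.0);  // replace with the OpenCV estimate
        rate.sleep();
      }
      return 0;
    }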
↧
TF vs TF2 (lookupTwist vs lookup_transform)
Version: ROS Kinetic (Ubuntu 16.04)
I believe I am pinched in a narrow use-case crack between TF and TF2: tf's lookupTwist has a duration argument for averaging, while achieving the same result with tf2's lookup_transform appears to require a bunch of manual calculation, since tf2 does not provide that functionality.
I find myself dependent on that averaging, which is really useful ***generally*** for robotics. Some code changes in separate but related nodes have exposed tf's apparent inability to see static frames that were established before a given node initializes. This inability to see previously created static transforms is preventing me from resolving what should be a trivial refactor to support the externally driven changes, so I am looking for a solution.
TL;DR: I need a solution for at least ***one*** of the following:
- For TF to be able to see all static transforms (including those created before the node initializes)
- For TF2 to already have this (very useful) lookupTwist functionality available in rospy, and I just don't know where it is
- Pseudo code for a bespoke function that achieves this averaging with TF2 in 3D space
I see on the [TF2 tutorial page](http://wiki.ros.org/tf2/Tutorials/Writing%20a%20tf2%20listener%20%28C%2B%2B%29) an example of a new publisher that pushes a new Twist from another frame at the same point in time,
but if I were to do this I would need to fetch two transforms for the same frame at two points in time and then manually compute an averaged Twist value. The turtlebot example is incomplete for 3D.
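Here is the sort of bespoke function I think I would have to write — a rough sketch only, assuming a `tf2_ros::Buffer` already being filled by a listener. The frame names, window and function name are placeholders, lookup exceptions are left to the caller, and the RPY-rate approximation is only valid for small rotations over the window:

    #include <ros/ros.h>
    #include <tf2_ros/buffer.h>
    #include <tf2_geometry_msgs/tf2_geometry_msgs.h>
    #include <tf2/LinearMath/Matrix3x3.h>
    #include <geometry_msgs/TransformStamped.h>
    #include <geometry_msgs/Twist.h>

    geometry_msgs::Twist twistFromTransforms(tf2_ros::Buffer& buf,
                                             const std::string& target,
                                             const std::string& source,
                                             const ros::Duration& window)
    {
      // Latest available transform, and one roughly "window" seconds earlier.
      geometry_msgs::TransformStamped latest =
          buf.lookupTransform(target, source, ros::Time(0));
      geometry_msgs::TransformStamped earlier =
          buf.lookupTransform(target, source, latest.header.stamp - window);

      geometry_msgs::Twist twist;
      double dt = (latest.header.stamp - earlier.header.stamp).toSec();
      if (dt <= 0.0)
        return twist;

      // Linear velocity: finite difference of the translations.
      twist.linear.x = (latest.transform.translation.x - earlier.transform.translation.x) / dt;
      twist.linear.y = (latest.transform.translation.y - earlier.transform.translation.y) / dt;
      twist.linear.z = (latest.transform.translation.z - earlier.transform.translation.z) / dt;

      // Angular velocity: relative rotation converted to RPY and divided by dt.
      tf2::Quaternion q0, q1;
      tf2::fromMsg(earlier.transform.rotation, q0);
      tf2::fromMsg(latest.transform.rotation, q1);
      double roll, pitch, yaw;
      tf2::Matrix3x3(q1 * q0.inverse()).getRPY(roll, pitch, yaw);
      twist.angular.x = roll / dt;
      twist.angular.y = pitch / dt;
      twist.angular.z = yaw / dt;
      return twist;
    }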
Help resolving the issues with either approach appreciated.
↧
↧
Calculating velocity from tf
When listening to transforms on tf, is there a way to ask specifically for the one before the latest one? And when you have received a transform, how do you query it for its timestamp?
I have position information being intermittently broadcast onto tf. Because I don't know exactly when this is going to happen I can't ask for a transform with a specific time delay. I would like to ask for the latest two transforms, find the time delay between them and divide by this to give an estimate of velocity.
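For the timestamp part, this is the query I have in mind — a short sketch with the classic tf API, assuming a `tf::TransformListener listener` already exists and the usual odom/base_link frame names:

    tf::StampedTransform latest;
    listener.lookupTransform("odom", "base_link", ros::Time(0), latest);
    ros::Time when = latest.stamp_;  // timestamp of the transform actually returned
    // Polling like this and differencing two results once stamp_ changes would
    // give the time delay needed for a velocity estimate.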
Any help would be much appreciated!
James
↧
TF suddenly shifts
Hi! I'm trying to build a map with hector and my problem is about TF: although I use the 2D Pose Estimate tool, after a short time the axes suddenly shift. My launch file has this part inside:
The node that I use is given by:

    rosrun tf static_transform_publisher x y z yaw pitch roll frame_id child_frame_id period_in_ms

Where am I going wrong? Also, if I rotate my LIDAR a bit faster, the TF cannot follow the movement; which parameter do I have to change in the launch file? Thank you all.
↧
How to listen to static transform from robot state publisher using the tf2 TransformListener?
I have set up a robot_state_publisher with my own robot, and I implemented a listener to obtain the TF messages using tf2_ros's TransformListener. I can obtain the /tf information correctly using lookupTransform() and allFramesAsString(). However, I can't figure out a way to obtain /tf_static (I believe this is a latched topic, which doesn't get published the same way as /tf). I read the source code and I just couldn't see how I could extract the information if `message_subscriber_tf_static_` is not even being triggered.
Using `rostopic echo /tf_static` you can definitely see that it has been published, but the transform listener's `message_subscriber_tf_static_` never triggers a call to `static_subscription_callback()`.
I am also wondering whether, instead of using a launch file to set up robot_state_publisher, implementing it in C++ and calling publishFixedTransforms() in a loop would solve the problem (i.e. implementing our own publisher routine for static tf).
Note: tf_echo seems to work with robot_state_publisher, but the tf2 transform listener does not.
Example listener code I am using, which is from the tutorial page (I have a static transform between link0 and world, described in my URDF file):
    tf2_ros::Buffer tfBuffer;
    tf2_ros::TransformListener tfListener(tfBuffer, nh_arm, false);
    try {
      geometry_msgs::TransformStamped transformStamped =
          tfBuffer.lookupTransform("link0", "world", ros::Time(0));
      ROS_INFO_STREAM("link1=\n" << transformStamped.transform);
    }
    catch (tf2::TransformException &ex) {
      ROS_WARN("%s", ex.what());
      // ros::Duration(1.0).sleep();
    }
My question is how to actually get a "static transform" (such as a fixed joint described in the URDF file). There is no documentation on how to get it: there are plenty of examples of how to get a regular tf transform, but not a static one. The constructor call above is just one example; I was trying out different namespace and thread settings. I also compiled from source and made small changes to see which functions are actually being called.
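For reference, this is the minimal pattern from the tutorials that I am comparing against — a sketch only, using the default listener constructor (dedicated spin thread) and a lookup timeout, with the frame names from my URDF:

    #include <ros/ros.h>
    #include <tf2_ros/transform_listener.h>
    #include <geometry_msgs/TransformStamped.h>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "static_tf_probe");
      tf2_ros::Buffer tfBuffer;
      tf2_ros::TransformListener tfListener(tfBuffer);  // default: dedicated spin thread

      try
      {
        // Static transforms are valid for all time, so ros::Time(0) is enough;
        // the timeout just gives the latched /tf_static message time to arrive.
        geometry_msgs::TransformStamped ts =
            tfBuffer.lookupTransform("link0", "world", ros::Time(0), ros::Duration(2.0));
        ROS_INFO_STREAM("world -> link0:\n" << ts.transform);
      }
      catch (tf2::TransformException& ex)
      {
        ROS_WARN("%s", ex.what());
      }
      return 0;
    }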
↧
pcl_ros::transformPointCloud shows Lookup would require extrapolation into the past
I was trying to transform a point cloud from the camera frame to the base_link frame using pcl_ros::transformPointCloud. However, TF always reports the error "Lookup would require extrapolation into the past". I checked carefully to make sure I have everything set up correctly, but I can't figure out why I get this error.
Could you show me what I did wrong?
Here is my code:
    #include <...my stuffs..>

    // The point type is assumed to be pcl::PointXYZ throughout.
    class cloudHandler
    {
    public:
        cloudHandler()
        {
            main_sub = nh.subscribe("pico_flexx/points", 1, &cloudHandler::mainCB, this);
            rail_plane_pub = nh.advertise<sensor_msgs::PointCloud2>("rail_plane", 1);
            fit_rails_sub = nh.subscribe("rail_plane", 1, &cloudHandler::fit_railsCB, this);
        }

        void mainCB(const sensor_msgs::PointCloud2ConstPtr& input)
        {
            pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
            pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_projected(new pcl::PointCloud<pcl::PointXYZ>);
            pcl::fromROSMsg(*input, *cloud);
            cloud_direction_y(cloud);
            cloud_direction_z(cloud);
            cloud_stat_remove(cloud);
            rail_plane(cloud, cloud_projected);
        }

        void rail_plane(pcl::PointCloud<pcl::PointXYZ>::Ptr input,
                        pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_projected)
        {
            pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
            *cloud = *input;

            // RANSAC search for the dominant plane perpendicular to the y axis.
            pcl::ModelCoefficients::Ptr floor_coefficients(new pcl::ModelCoefficients());
            pcl::PointIndices::Ptr floor_indices(new pcl::PointIndices());
            pcl::SACSegmentation<pcl::PointXYZ> floor_finder;
            floor_finder.setOptimizeCoefficients(true);
            floor_finder.setModelType(pcl::SACMODEL_PERPENDICULAR_PLANE);
            floor_finder.setMethodType(pcl::SAC_RANSAC);
            floor_finder.setMaxIterations(500);
            floor_finder.setAxis(Eigen::Vector3f(0, 1, 0));
            floor_finder.setDistanceThreshold(0.07);
            floor_finder.setEpsAngle(MYDEG2RAD(5));
            floor_finder.setInputCloud(boost::make_shared<pcl::PointCloud<pcl::PointXYZ> >(*cloud));
            floor_finder.segment(*floor_indices, *floor_coefficients);

            if (floor_indices->indices.size() > 3)
            {
                // Split the cloud into plane inliers and everything else.
                pcl::PointCloud<pcl::PointXYZ>::Ptr floor_points(new pcl::PointCloud<pcl::PointXYZ>);
                pcl::ExtractIndices<pcl::PointXYZ> extractor;
                extractor.setInputCloud(boost::make_shared<pcl::PointCloud<pcl::PointXYZ> >(*cloud));
                extractor.setIndices(floor_indices);
                extractor.filter(*floor_points);
                extractor.setNegative(true);
                extractor.filter(*cloud);

                // Project the inliers onto the fitted plane.
                pcl::ProjectInliers<pcl::PointXYZ> proj;
                proj.setModelType(pcl::SACMODEL_PLANE);
                proj.setInputCloud(floor_points);
                proj.setModelCoefficients(floor_coefficients);
                proj.filter(*cloud_projected);

                floor_points->header.frame_id = input->header.frame_id;
                floor_points->header.stamp = input->header.stamp;

                sensor_msgs::PointCloud2 msg;
                pcl::toROSMsg(*cloud_projected, msg);
                msg.header.stamp = ros::Time::now();
                rail_plane_pub.publish(msg);
            }
        }

        void fit_railsCB(const sensor_msgs::PointCloud2ConstPtr& input_from_camera_frame)
        {
            try
            {
                cam_bl.waitForTransform("base_link", "pico_flexx_optical_frame",
                                        ros::Time(0), ros::Duration(5));
                cam_bl.lookupTransform("base_link", "pico_flexx_optical_frame",
                                       ros::Time(0), cam_bl_tf);
            }
            catch (tf::TransformException& ex)
            {
                ROS_WARN("%s", ex.what());
            }
            sensor_msgs::PointCloud2 input_in_bl;
            pcl_ros::transformPointCloud("base_link", *input_from_camera_frame,
                                         input_in_bl, cam_bl);
        }

    private:
        tf::TransformListener cam_bl;
        tf::StampedTransform cam_bl_tf;
        ros::NodeHandle nh;
        ros::Subscriber main_sub, fit_rails_sub;
        ros::Publisher rail_plane_pub;
    };

    int main(int argc, char **argv)
    {
        ros::init(argc, argv, "pico_rails_node");
        cloudHandler handler;
        ros::spin();
        return 0;
    }
And the error message:
`[ERROR] [1518908418.439292642, 1518732053.305808327]: Lookup would require extrapolation into the past. Requested time 1518732053.118709000 but the earliest data is at time 1518732053.201569502, when looking up transform from frame [pico_flexx_optical_frame] to frame [base_link]`
A static tf between base_link -> pico_flexx_link is published from a launch file. pico_flexx_link -> pico_flexx_optical_frame is provided by the camera's driver.
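For what it is worth, the pattern I have seen recommended elsewhere (a sketch only, reusing the member names from my class above) is to wait for the transform at the cloud's own stamp rather than at ros::Time(0), since as far as I can tell transformPointCloud looks up the transform at the message's stamp internally:

    void fit_railsCB(const sensor_msgs::PointCloud2ConstPtr& input)
    {
        try
        {
            // Wait until a transform is available at the cloud's own timestamp.
            cam_bl.waitForTransform("base_link", input->header.frame_id,
                                    input->header.stamp, ros::Duration(1.0));
            sensor_msgs::PointCloud2 input_in_bl;
            pcl_ros::transformPointCloud("base_link", *input, input_in_bl, cam_bl);
        }
        catch (tf::TransformException& ex)
        {
            ROS_WARN("%s", ex.what());
        }
    }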
↧
↧
robot_localization, odom topics frame_ids and tfs
I am trying to use ekf_localization_node from robot_localization to create the odom -> base_footprint tf transform. For this I need the other nodes (the ones that capture raw odometry) to not publish this transform themselves. But from these nodes I still need various topics of type Odometry, Pose, etc., and the data in these topics must have a frame_id associated with it. The nodes providing these topics (Gazebo right now, but I plan to use the TurtleBot encoders, the TurtleBot gyroscope and visual odometry) set frame_id to odom and apparently make some assumptions about the odom frame.
Is this a correct setup?
Do I need to set some other frame_id there?
What are best practices for this?
Won't these nodes mess up the data when the odom transform is published by ekf_localization_node rather than by themselves?
Right now my filtered odometry jumps around like crazy, while the odometry from Gazebo on which it is based is close to perfect.
↧
frame name when launching static tf publisher
Dear all,
For example, when processing RGB and depth images, if I run a tf publisher to provide the transformation between the different images, how does tf know the frame names of the different images? Does the tf publisher follow the frame_id in the header of sensor_msgs/Image? I am confused: when I launch a static tf publisher for two frames, how does it relate a frame name to a particular image, robot link, or anything else?
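My current mental model, to make the question concrete (names here are just placeholders):

    // The frame a message "lives in" is simply the string in its header, and
    // that exact string is what a static_transform_publisher (or any other
    // broadcaster) has to use as a frame name for tf to relate the two.
    sensor_msgs::Image img;
    img.header.frame_id = "camera_rgb_optical_frame";
    img.header.stamp = ros::Time::now();
    // e.g. a static transform base_link -> camera_rgb_optical_frame then
    // places this image's data relative to the robot.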
Thanks for any help.
↧
tf between pointcloud and world frame
Dear all,
Instead of using the freenect_launch package, I published all of the images and camera infos that depth_image_proc requires myself, and successfully generated a point cloud: I can echo the topic and it does have output. The problem now is that when I want to use the point cloud to generate an octomap in MoveIt, the messages are always dropped, I assume because the tf between the point cloud and world is missing. But because I don't know the frame name of my point cloud, I don't know what to write when I launch the tf publisher. Any idea how to get the frame name of my point cloud? Thanks in advance for any help.
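In case it clarifies what I am after: I imagine the frame name is just whatever sits in the cloud's header, something like this (topic name is a placeholder), or the equivalent check with rostopic echo on the header field:

    // One-off check of the point cloud's frame name.
    void cloudCB(const sensor_msgs::PointCloud2ConstPtr& msg)
    {
      ROS_INFO("point cloud frame_id: %s", msg->header.frame_id.c_str());
    }

    // ...
    ros::Subscriber sub = nh.subscribe("points", 1, cloudCB);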
↧
camera_frame does not exist.
Hi,
I am new to ROS, I'm going through the ROS-Industrial tutorial, and I'm stuck on exercise 4.0:
http://ros-industrial.github.io/industrial_training/_source/session4/Motion-Planning-CPP.html
I get errors like the ones below:

    [ERROR] [1511855001.563918253]: Exception thrown while processing service call: "camera_frame" passed to lookupTransform argument source_frame does not exist.
    [ERROR] [1511855001.564382981]: Service call failed: service [/localize_part] responded with an error: "camera_frame" passed to lookupTransform argument source_frame does not exist.
    [ERROR] [1511855001.564592927]: Could not localize part
I verified the frames with the command:

    rosrun tf view_frames

and it is true: there are many frames such as world, base_link, shoulder_link, etc., but there is no camera_frame at all.
My question is where I could have made a mistake, or how to add camera_frame to the robot environment.
----------
Update 29th Nov 2017 9:10
I found in [fake_ar_publisher code](https://github.com/Jmeyer1292/fake_ar_publisher/blob/master/src/fake_ar_publisher.cpp) method comment
// Given a marker w/ pose data, publish an RViz visualization
// You'll need to add a "Marker" visualizer in RVIZ AND define
// the "camera_frame" TF frame somewhere to see it.
so I added the marker, but I can see nothing, and I also don't know how to add the TF frame.
----------
Update 30th Nov 2017 8:40
I found a solution that works for me. Apart from the Marker added yesterday, I also added a camera_frame broadcaster:
    #include <ros/ros.h>
    #include <tf/transform_broadcaster.h>

    void broadcastCameraPosition(tf::TransformBroadcaster &br, tf::Transform &transform)
    {
        transform.setOrigin(tf::Vector3(0.0, 0.0, 0.0));
        transform.setRotation(tf::Quaternion(0, 0, 0, 1));
        br.sendTransform(tf::StampedTransform(transform, ros::Time::now(),
                                              "world", "camera_frame"));
        ROS_INFO("Transform camera_frame sent.");
    }

    int main(int argc, char **argv)
    {
        ros::init(argc, argv, "camera_frame_node");
        ros::NodeHandle nh;
        tf::TransformBroadcaster br;
        tf::Transform transform;
        ros::Rate loop_rate(0.5);
        while (ros::ok())
        {
            broadcastCameraPosition(br, transform);
            ros::spinOnce();
            loop_rate.sleep();
        }
        return 0;
    }
↧
↧
tf error between camera link and robot link
Dear all,
I am using a SoftKinetic camera to provide depth and RGB images. There is an existing ROS package for SoftKinetic cameras, and it can generate and publish a point cloud, which I use in MoveIt to build an octomap.
The problem is that it always reports the error `Transform error: Lookup would require extrapolation into the future. Requested time 1519418829.336372746 but the latest data is at time 1519418829.326575994, when looking up transform from frame [link_4] to frame [softkinetic_camera_link]`. The update rates are set as follows: the point cloud is published at 25 Hz, the tf for the robot state is updated at 50 Hz, and the tf between the camera and the world frame is updated at 50 Hz. I have tried a lot of settings but the problem remains. Any idea how to solve this?
However, the octomap still works even with this problem. The real issue is that when I remove an object from the scene, the previous voxels do not disappear. I am not sure whether that is because of the tf error.
Thanks in advance for any help.
↧
Another question about extrapolation in the past
Hello all,
That's a question that has been asked multiple times, but none of the solutions have worked so far. I'm playing data from a rosbag, and when trying to query tf I get this error:

    [ERROR] [1519650555.804462800, 1504014242.149115352]: Lookup would require extrapolation into the past. Requested time 1504014237.046037623 but the earliest data is at time 1504014237.201914971, when looking up transform from frame [base_link] to frame [world]

Here is the code I'm using to query it:
    ros::Time stamp = scan->header.stamp;

    // Read transformation. UGLY TRICK :P
    tf::TransformListener listener(ros::Duration(50.0));
    std::cout << listener.DEFAULT_CACHE_TIME << " " << listener.getCacheLength() << std::endl;

    tf::StampedTransform transform;
    try {
        listener.waitForTransform(odom_frame_, "/base_link", stamp, ros::Duration(5.0));
        listener.lookupTransform(odom_frame_, "/base_link", stamp, transform);
    } catch (tf::TransformException ex) {
        ROS_ERROR("%s", ex.what());
    }
As visible in the code, I tried to increase the buffer size, but it seems to be ignored.
I'm running the rosbag with `--clock` and I've set `use_sim_time` to true. I've also tried slowing down the rosbag so my algorithm can catch up, but I still have the same problem, so I'm at a loss as to what the actual problem is. Using `ros::Time(0)` instead of the header time stamp works.
What is wrong with my approach?
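For comparison, the structure I believe is more usual — a sketch, assuming the frame names from the error above: the listener is created once, outside the data callbacks, so its buffer has already accumulated transforms by the time a scan arrives and the scan's own stamp can be used for the lookup:

    #include <ros/ros.h>
    #include <sensor_msgs/LaserScan.h>
    #include <tf/transform_listener.h>

    class ScanHandler
    {
    public:
      ScanHandler() : listener_(ros::Duration(50.0)) {}

      void scanCB(const sensor_msgs::LaserScanConstPtr& scan)
      {
        tf::StampedTransform transform;
        try
        {
          listener_.waitForTransform("world", "base_link",
                                     scan->header.stamp, ros::Duration(1.0));
          listener_.lookupTransform("world", "base_link",
                                    scan->header.stamp, transform);
        }
        catch (tf::TransformException& ex)
        {
          ROS_ERROR("%s", ex.what());
        }
      }

    private:
      tf::TransformListener listener_;  // long-lived, keeps a 50 s cache
    };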
↧
How to invert coordinate frames of a Transform
I have a fixed transform coming from the [ar_track_alvar](http://wiki.ros.org/ar_track_alvar) package that gives me the marker's pose in the camera_link frame. I want to invert it and save the same data, but reversed. The problem is that the marker frame itself is flipped, so the plain inverse gives me the wrong result.
How can I make the marker's x and y coordinates consistent with the camera_link ones? What is the easiest way to achieve this?
I want to have x pointing forward (red) and y to the left (green).

Thanks for anyone helping me out!
Cheers,
Simone.
↧
Tf tree getting messed up when launching freenect.launch
Just launching the robot description:

After launching freenect:

    roslaunch freenect_launch freenect.launch

↧
↧
ImportError: dynamic module does not define init function (PyInit__tf2)
I'm running Python 2.7, and when I try to import tf I get the error mentioned in the title.
This is pretty confusing, because according to this [issue](https://github.com/ros/geometry2/issues/259) it is a problem in Python 3, not in Python 2.7.
Any help will be much appreciated, thanks!
↧
setting up a tf for SICK Laser Scanner PLS101-312
Hi,
I am currently working with a 2D laser scanner (PLS101-312). There are no drivers registered for this device in the ROS repositories; however, following this [answer](http://answers.ros.org/question/61600/sick-scanner-pls200-113/), I found a fork on GitHub [here](https://github.com/hawesie/laser_drivers/tree/master/sicktoolbox_pls_wrapper) that works fine with the device I am using. I can also see the stream of laser data using `$ rostopic echo /scan`.
My question is: how can I view the laser scan data in RViz? When I tried the /scan topic in RViz, I got an error saying that the frame for the laser scanner cannot be /world; the [example here](http://wiki.ros.org/sicktoolbox_wrapper/Tutorials/UsingTheSicklms) shows it being set to base_laser. How can I do the same? I am aware that there should be a tf for my laser scanner. What would be the simplest way of setting up a tf for the laser, just for testing the scanner output? Please suggest some ideas.
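From what I gathered, the simplest tf for testing would be a fixed broadcaster like the sketch below (base_link -> laser, with made-up offsets; the child frame must match whatever frame_id the driver puts into the LaserScan header, which I assume is "laser"), or the equivalent static_transform_publisher one-liner:

    #include <ros/ros.h>
    #include <tf/transform_broadcaster.h>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "laser_tf_publisher");
      tf::TransformBroadcaster br;
      // Laser mounted 10 cm forward and 20 cm up from base_link (made-up values).
      tf::Transform t(tf::Quaternion(0, 0, 0, 1), tf::Vector3(0.1, 0.0, 0.2));
      ros::Rate rate(20);
      while (ros::ok())
      {
        br.sendTransform(tf::StampedTransform(t, ros::Time::now(), "base_link", "laser"));
        rate.sleep();
      }
      return 0;
    }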
Thanks again,
Murali
↧
getting the tf between camera and markers
I'm pretty new to ROS and Python. I'm using the ar_track_alvar package, and I need to get the tf from the camera to a marker. Supposedly it is provided by the package, so I thought I would create a listener to get it, but I have no idea how to do that.
↧