Hello everyone!
This is more of a discussion than a question needing a specific answer. I couldn't find a similar question, so here:
I've written a flight simulator with critical performance constraints: I need it to run as fast as possible.
I need to transform various vectors from one frame to another in each time step.
I use `tf::Transform` objects to accomplish this, as seen [here](https://github.com/Georacer/last_letter/blob/master/last_letter/include/propulsion/propulsionLib.hpp#L31) and [here](https://github.com/Georacer/last_letter/blob/master/last_letter/src/propulsion/propulsionLib.cpp#L85), for example. I fill the `body_to_mount` fields with the required tf data and use multiplication (as [here](https://github.com/Georacer/last_letter/blob/master/last_letter/src/propulsion/propulsionLib.cpp#L128)) to transform my data.
I do this because I have the feeling that calling the tf::TransformListener::rotateVector methods frequently in my code will slow down my loop time, as the method has to reach into the tf tree, parse it and retrieve the data each time.
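For comparison, a minimal sketch of the caching pattern described above, assuming the mount is static and using illustrative frame names ("body", "mount"):

    #include <tf/transform_listener.h>

    // Look the transform up once, then reuse plain multiplication in the
    // time-step loop instead of querying the listener every iteration.
    class Propulsion
    {
    public:
      void cacheTransform()
      {
        listener_.waitForTransform("body", "mount", ros::Time(0), ros::Duration(1.0));
        listener_.lookupTransform("body", "mount", ros::Time(0), body_to_mount_);
      }

      tf::Vector3 toBody(const tf::Vector3& v_mount) const
      {
        return body_to_mount_ * v_mount;   // no tf tree lookup per time step
      }

    private:
      tf::TransformListener listener_;
      tf::StampedTransform body_to_mount_;
    };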
What do you think? Am I being paranoid?
Thanks in advance!
↧
Best practices for pose/vector transformations
↧
youbot bad results in gmapping
Hey everyone,
I'm having trouble with navigation, or more precisely with the map.
I suspect my gmapping parameters are messed up but I'm not sure as it might also be an odom or tf issue.
Odom also seems messed up as you can see in the [rosbag file](https://www.dropbox.com/s/5ha5jmnkd7j5qts/2015-11-25-16-20-06.bag?dl=0) in which I recorded base_scan, tf and odom.
In that recording the youbot is manually controlled: it drives out of a room, scouts the area and drives back into the room through the same door; however, the data suggests it re-enters the room through the wall about a meter to the right of the door.
Any ideas how to tackle this?
This is the launch file I'm using:
↧
↧
tf quaternion explained
Hi, I'm a bit confused about the quaternion convention. If I define a tf::Quaternion(0, 1, 0, 0), what is the direction of the vector? A diagram would be appreciated. I'm used to the Euler convention.
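For reference, a short sketch of how the (x, y, z, w) convention reads in code (standard tf headers assumed):

    #include <cmath>
    #include <tf/transform_datatypes.h>

    // tf::Quaternion takes (x, y, z, w), so (0, 1, 0, 0) is not a direction
    // vector: it is a rotation of 180 degrees about the y axis, since
    // (x, y, z) = sin(theta/2) * axis and w = cos(theta/2).
    tf::Quaternion q(0, 1, 0, 0);

    // Going the other way from familiar Euler angles (roll, pitch, yaw in radians):
    tf::Quaternion q_rpy = tf::createQuaternionFromRPY(0.0, M_PI, 0.0);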
↧
Getting the error :"terminate called after throwing an instance of 'tf2::ExtrapolationException' what(): Lookup would require extrapolation into the future. "
I'm trying to convert a laser scan into a point cloud. I have used this code: http://answers.ros.org/question/11232/how-to-turn-laser-scan-to-point-cloud-map/
Now, when I play the bag, run the tf static_transform_publisher, and run the mentioned code, I get this error:
    terminate called after throwing an instance of 'tf2::ExtrapolationException'
    what(): Lookup would require extrapolation into the future. Requested time 1448633914.965443600 but the latest data is at time 1448633914.954737805, when looking up transform from frame [laser] to frame [base_link]
The bag contains only the scan topic from the laser, the static_transform_publisher publishes the transform between the laser frame and the base frame. What am I doing wrong?
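For reference, a minimal sketch of guarding the conversion with waitForTransform inside the scan callback (listener and frame names illustrative, not the exact code from the link):

    #include <ros/ros.h>
    #include <sensor_msgs/LaserScan.h>
    #include <tf/transform_listener.h>

    // Only transform once the laser -> base_link transform is available
    // at the scan's own timestamp; otherwise skip the scan.
    void scanCallback(const sensor_msgs::LaserScan::ConstPtr& scan,
                      tf::TransformListener& listener)
    {
      if (!listener.waitForTransform("base_link", scan->header.frame_id,
                                     scan->header.stamp, ros::Duration(0.2)))
      {
        ROS_WARN("Transform base_link <- %s not yet available, skipping scan",
                 scan->header.frame_id.c_str());
        return;
      }
      // ... transformLaserScanToPointCloud as in the linked code ...
    }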
Thanks in advance!
↧
tf + multiple machines + wifi
I'm working on a multi-robot system. Currently, the system runs one roscore on a "mission control" machine. Each robot runs all of its own nodes on its own machine. Bandwidth between machines is limited and latency is variable (the system uses wifi and also 4G + VPN).
The general model here is that data exchange between machines is very minimal and relatively infrequent. Any cross-machine message should not be time-critical or require high bandwidth.
I am new to TF and I want to understand how it exchanges information. For regular ROS nodes, the mapping between the networking layer and the ROS layer is very straightforward: a publisher and subscriber pair represents a TCP/IP connection, and if the publisher and subscriber are on the same machine, there is no network traffic.
I am having trouble understanding how tf exchanges information over the network. The descriptions I've found are a bit vague and according to the ros node graph (using rqt_graph), it seems everything just goes to a node called "/tf".
What I would like to accomplish is this:
- Each robot has its own set of local frames, updated continuously, which I would like to avoid transmitting over the network.
- The "mission control" machine maintains a global world frame, and knows where each robot is located (but this can be updated on the order of 1hz or less).
If someone could provide me with some reference info that will help me design this, it would be much appreciated.
What I'm hoping I can do is follow a model similar to the rest of the nodes:
- Make sure tf frames are only accessed on the machine they are generated on.
- Write a slow-update node that simply publishes the robot's position to the "mission control" node at a low rate (sketched below). My hope is that this prevents tf from sending data over the network.
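A minimal sketch of such a slow-update bridge, assuming local frames odom/base_link and a topic named robot_pose (all names illustrative):

    #include <ros/ros.h>
    #include <tf/transform_listener.h>
    #include <geometry_msgs/PoseStamped.h>

    // Look up the robot pose in the local tf tree and republish it as a
    // PoseStamped at ~1 Hz for the mission-control machine.
    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "pose_bridge");
      ros::NodeHandle nh;
      ros::Publisher pub = nh.advertise<geometry_msgs::PoseStamped>("robot_pose", 1);
      tf::TransformListener listener;

      ros::Rate rate(1.0);  // low rate over the wireless link
      while (ros::ok())
      {
        tf::StampedTransform tfm;
        try
        {
          listener.lookupTransform("odom", "base_link", ros::Time(0), tfm);
          geometry_msgs::PoseStamped msg;
          msg.header.stamp = tfm.stamp_;
          msg.header.frame_id = tfm.frame_id_;
          tf::poseTFToMsg(tf::Pose(tfm.getRotation(), tfm.getOrigin()), msg.pose);
          pub.publish(msg);
        }
        catch (const tf::TransformException& ex)
        {
          ROS_WARN_THROTTLE(10.0, "%s", ex.what());
        }
        rate.sleep();
      }
      return 0;
    }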
↧
↧
tf tree slam and base link
Hello,
I am trying to use a quadrotor with ROS MoveIt!. The driver node of the quadrotor publishes a tf tree that contains odom, base, frontcam and bottomcam, where odom is the parent frame of base, and base is the parent frame of frontcam and bottomcam. Besides this driver I am running a SLAM algorithm, which publishes another tf tree containing two frames, map and sensor.
When setting up MoveIt! with a URDF model, I create a virtual link between the map frame from the SLAM algorithm and the base link of the quadrotor. When I launch the resulting demo.launch I get the following warning messages:
    [ WARN] [1448659454.360465559]: Unable to transform object from frame 'ardrone_base_frontcam' to planning frame 'Map' (Could not find a connection between 'Map' and 'ardrone_base_frontcam' because they are not part of the same tree.Tf has two or more unconnected trees.)
    [ WARN] [1448659454.360597000]: Unable to transform object from frame 'ardrone_base_link' to planning frame 'Map' (Could not find a connection between 'Map' and 'ardrone_base_link' because they are not part of the same tree.Tf has two or more unconnected trees.)
    [ WARN] [1448659454.360687375]: Unable to transform object from frame 'ardrone_base_bottomcam' to planning frame 'Map' (Could not find a connection between 'Map' and 'ardrone_base_bottomcam' because they are not part of the same tree.Tf has two or more unconnected trees.)
    [ WARN] [1448659454.360741781]: No transform available between frame 'odom' and planning frame 'Map' (Could not find a connection between 'Map' and 'odom' because they are not part of the same tree.Tf has two or more unconnected trees.)
Apparently I need to publish transforms to connect the trees so there is only one. I see the following problem: in one tree, odometry is being published from the IMU. I am not interested in this data; I only wish to represent the pose of the URDF model in rviz with the data from the SLAM algorithm. I would like the base of the quadrotor to share the pose output of the sensor frame.
Any help is appreciated. Thanks
EDIT1: I cannot provide the tf frames right now because I do not have the quadrotor with me; for now, I can give a brief explanation. I am using the visual SLAM PTAM, so in one tree I have map and sensor, where map is the fixed frame. In the other tf tree I have odom followed by base, and the two children of base are frontcam and bottomcam. In this post ([http://answers.ros.org/question/216826/robot_localization-tf-tree-structure/]) I read that there would have to be a static transform between world and odom. I have found no files for parameter configuration.
↧
rviz not able to visualize laser scan
I'm having trouble with rviz being unable to show laser scan point data. I'm not sure if this is an rviz problem, a problem with my rviz config, or a problem with my setup in general (i.e. I'm doing something wrong with coordinate frames or something along those lines).
**Background**: I am new to using tf frames within ROS. I decided to learn how to use tf and frame headers to properly tag all my sensor data so I can use existing packages (for example hector_mapping). I'm going to try to give some background on how I arrived where I am now in case the context helps, because I'm really not sure what I did to break it.
Here's how I used to be able to see my laser scan data (it is from a little neato lidar):
1. open rviz
2. add LaserScan
3. Set topic to neato_scan topic
Next I created a node which publishes the transform from my neato_laser frame to base_link, and then configured hector_mapping and launched it. It worked well and I was able to visualize both the map and the laser scan data after configuring rviz.
At some point rviz stopped showing laser scan data (the problem is I'm not sure exactly what changes, if any, triggered this).
Here is what I know:
- hector_mapping is still able to see and transform the laser scan data correctly. The map generates fine and the robot is able to localize from the laser data + map. To me this indicates the coordinate frame transforms are in fact set up correctly.
- I am able to verify that scan data is being generated by doing `rostopic echo /neato_scan`.
- In RVIZ, no matter what I try, I can't get it to visualize the laser scan data.
I've tried with the full config that allowed me to see the map + laser scan data before, and I've also tried reverting to setting all frames (fixed frame, target frame, etc) to neato_laser and only adding the LaserScan to the visualization.
**Present Situation**:
- LaserScan shows status "OK" and under topic it shows xxx messages received which counts up correctly.
- Occasionally a single point will show up for a split second, usually at the origin. If I set a large decay time, it will show several points, mostly at the origin, and some at impossible locations (for example in mid-air directly above the origin).
- There is no evidence of any errors. All of rviz's status fields show OK with a green check mark.
It's quite likely I'll need to provide more info or do some debugging to trace this problem, but I'm not sure how to proceed. Any help in sorting this out is much appreciated.
↧
Tf frame question
I have a '/world' frame and a '/sensor' frame. The '/sensor' frame is a child frame of the '/world' frame.
Now I want to make a reference vector in the '/sensor' frame, let's say (1, 0, 0), which is the unit vector along the x axis (the x axis of the sensor coordinate system). In the code I define the vector as tf::Quaternion(1, 0, 0, 0) (ignore the rotation for now).
The question is: how can I make sure the vector is expressed in the /sensor frame? And when I define a vector with x, y, z coordinates, do they follow the rviz axis convention?
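A minimal sketch of one way to express this, assuming illustrative frame names and a tf::TransformListener: a direction is a tf::Vector3 (not a tf::Quaternion), and stamping it with a frame_id is what ties it to the /sensor frame. Axes follow the usual ROS/rviz convention (x red, y green, z blue).

    #include <tf/transform_listener.h>

    // Rotate the sensor's unit x axis into the /world frame.
    tf::Vector3 sensorXAxisInWorld(tf::TransformListener& listener)
    {
      tf::Stamped<tf::Vector3> x_sensor(tf::Vector3(1, 0, 0), ros::Time(0), "sensor");
      tf::Stamped<tf::Vector3> x_world;
      listener.transformVector("world", x_sensor, x_world);
      return x_world;
    }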
↧
TF Transform on a callback
I have two sensors, a camera and a lidar, that publish their data in their own frames. I have set up the TF transforms for my robot and sensors.
When I read the data in a callback, I have to transform all the sensor data into the same frame.
    // Wait until the transform to the target frame is available at the
    // timestamp of the incoming message.
    if (!tf_->waitForTransform(frame_id, camera_frame, msg.header.stamp, ros::Duration(1.0)))
    {
      ROS_WARN_STREAM("VISION CALLBACK: Timed out waiting for transform from " << camera_frame
                      << " to " << frame_id << " at " << ros::Time().toSec());
      return;
    }

    // Build the pose in the camera frame, stamped with the acquisition time...
    target.header.frame_id = frame_id;
    target.header.stamp = msg.header.stamp;
    target_in_vision.header.frame_id = camera_frame;
    target_in_vision.header.stamp = msg.header.stamp;
    target_in_vision.pose.position.x = msg.x;
    target_in_vision.pose.position.y = msg.y;
    target_in_vision.pose.position.z = 0.0;
    target_in_vision.pose.orientation = tf::createQuaternionMsgFromYaw(0.0);

    // ...and transform it into the target frame.
    tf_->transformPose(frame_id, target_in_vision, target);
Is it OK to use the msg.header.stamp that was generated at sensor acquisition, or should I use ros::Time::now()?
All the code I have seen on the internet uses ros::Time::now(), which doesn't make sense to me because it isn't the time of sensor acquisition.
By the way, on my robot the lidar and camera are always in the same position, so having to include the time in the tf transform makes even less sense.
I read the TF tutorials but I'm still a little confused.
↧
↧
Run tf/tf2 as nodelet
Is it possible to run tf/tf2 as a nodelet?
EDIT: What I mean is that, for example, the robot_state_publisher is always a node. Is there a nodelet version? If not, is there a reason, or has nobody needed it yet?
Also: if I publish TF transformations and do transform lookups in nodelets, do I get the advantages of using nodelets? I guess that in this case I should make sure that everything publishing and subscribing to TF is a nodelet. However, when using nodelets I should subscribe with ConstPtr and publish Ptr messages. How do I guarantee this with TF?
Thanks!
↧
(Rviz point cloud from pcd) transformation
Hi!
I am reading in point clouds from *.pcd-files and am visualizing them using RViz. Unfortunately, the point cloud is weirdly oriented:
[Look here (the "right" side of the point cloud is actually the floor)](http://postimg.org/image/h2s0kag1h/)
I believe that this is because ROS uses a different coordinate system than PCL (or at least than how these pointclouds have been recorded). **How do I display my point cloud correctly?**
After reading through some tutorials I think I need to implement a tf broadcaster that transforms between a "world" frame (ROS coordinate system) and a "pcd" frame (the coordinate system used in my files). Then I can set the world frame as the fixed frame in rviz and have the PointCloud2 display use my broadcast transformation. Is that correct? Or is there a more elegant way of getting rviz to display the point cloud "correctly"?
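A minimal sketch of the broadcaster idea described above (frame names illustrative; the RPY below is the common optical-frame-to-ROS rotation and should be adjusted to however the .pcd files were actually recorded):

    #include <cmath>
    #include <ros/ros.h>
    #include <tf/transform_broadcaster.h>
    #include <tf/transform_datatypes.h>

    // Periodically broadcast world -> pcd so rviz can reorient the cloud.
    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "pcd_frame_broadcaster");
      tf::TransformBroadcaster br;
      tf::Transform t(tf::createQuaternionFromRPY(-M_PI / 2, 0, -M_PI / 2),
                      tf::Vector3(0, 0, 0));

      ros::Rate rate(10.0);
      while (ros::ok())
      {
        br.sendTransform(tf::StampedTransform(t, ros::Time::now(), "world", "pcd"));
        rate.sleep();
      }
      return 0;
    }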
Thanks!
↧
Problem with Robotino when starting robotino_navigation.launch
Hey there,
I want to navigate through a room with SLAM using gmapping. I already mapped this room in the following way (http://wiki.ros.org/robotino_navigation/Tutorials/Mapping%20with%20Robotino):
    roscore
    roslaunch robotino_node robotino_node.launch
    roslaunch robotino_teleop keyboard_teleop.launch
    roslaunch robotino_navigation gmapping.launch
    rosrun map_server map_saver -f /tmp/mymap
So I created a nice map of the room that I want to navigate through.
As soon as I start
    roscore
    roslaunch robotino_node robotino_node.launch
    roslaunch robotino_teleop keyboard_teleop.launch
    roslaunch robotino_navigation navigation.launch map_file:=/tmp/mymap.yaml
as described in http://wiki.ros.org/robotino_navigation/Tutorials/Navigation%20with%20Robotino, I get the following error message in the robotino_node window multiple times:
"No Transform available Error: "map" passed to lookupTransform argument target_frame does not exist."
The rviz interface fires up with my map, I can drive around using keyboard_teleop, and the robot moves in rviz.
When I send it a goal with "2D Nav Goal", it does not move.
Can anybody help me?
↧
Velocity control from posestamp format of a 6 DOF robot
Hello,
I have a question about a problem which we are facing.
We have a working 6 DOF robot where we can place the end-effector via a PoseStamped x, y, z. If we command the end-effector to a position x, y, z, this works correctly in rviz.
I use the arbotix simulation.
In reality, however, we are working with coply motors. These are driven by velocity.
We have a slider where we can set the velocity of the coply motor, but this applies the velocity to the joint.
However, we would like to control the position of the motors in x, y, z. We are using MoveIt!, which works with the arbotix simulator. For our project it is important to command the end-effector by x, y, z position. However, due to some design choices we have to deal with coply velocity drivers and not with, for example, arbotix servo drivers.
Can someone tell me how we could convert the x, y, z position in a simple way to velocities to control the coply drivers?
We have seen via the arbotix sim that when we enter an x, y, z position the joints are velocity-driven. So somewhere in the setup there is a velocity component; I would like to know where and, if possible, how.
I was looking at Twist or odometry, but I am not sure about these options.
So please help me find the correct way to position my robot via x, y, z position and velocity.
Sincerely,
Marcel Weusthof
↧
↧
Orientate orthogonal to point
Hello,
I am having some trouble regarding tf and orientation. I am trying to have one of my manipulators point directly towards another point in space. By this I mean that the new orientation of the manipulator should be such that the given point is on the x-axis of its reference frame.
How could this be accomplished, for example using quaternions? I imagine it could be done by obtaining the Euler angles of the direction vector in the current reference frame and converting them into a quaternion, but perhaps there is a more elegant solution?
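One sketch of the quaternion route, assuming the manipulator position and the target point are expressed in the same frame (names illustrative): rotate the local x axis onto the direction to the target about the axis perpendicular to both.

    #include <tf/transform_datatypes.h>

    // Quaternion whose rotation maps the local x axis onto the direction
    // from the manipulator origin to the target point.
    tf::Quaternion lookAtX(const tf::Vector3& from, const tf::Vector3& to)
    {
      const tf::Vector3 x_axis(1, 0, 0);
      tf::Vector3 dir = (to - from).normalized();

      tf::Vector3 axis = x_axis.cross(dir);
      if (axis.length2() < 1e-12)   // target (anti)parallel to x: any perpendicular axis works
        axis = tf::Vector3(0, 0, 1);
      return tf::Quaternion(axis.normalized(), x_axis.angle(dir));
    }

This leaves the roll about the new x axis unconstrained, which matches the requirement that only the x axis must point at the target.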
Cheers,
Brick
↧
Robot_localization + IMU + Camera Pose
Hi all,
I'm trying to use this package with two sensors: an IMU and a camera that gives back the pose with respect to a marker, using the ar_sys package.
In my opinion, the first problem is with the frames.
I have map_frame (fixed) -> marker_frame (fixed) and then camera_frame -> imu_frame.
Is it right that the map_frame is my map_frame, the odom_frame is my marker_frame, and the base_link is my imu_frame (which is at the centre of the robot)?
If I understand correctly, the transform between marker and camera is provided by robot_localization. But if I don't create the odom(marker) -> base_link(imu) transform, the EKF returns an error (Could not obtain transform from marker1->imu).
The solution is probably in this sentence:
> MAKE SURE something else is generating the odom->base_link transform. Note that this can even be another instance of robot_localization! However, that instance should *not* fuse the global data.
But how is it possible that base_link has two parents?
Here is the launch file (fcu is my IMU frame):
    [true,  true,  true,
     false, false, true,
     false, false, false,
     false, false, false,
     false, false, false]

    [false, false, false,
     true,  true,  true,
     false, false, false,
     true,  true,  true,
     true,  true,  true]
Thx
↧
Tf has two or more unconnected trees
Hi friends. I have a problem: I want to run my bag file with Karto, so I use static_transform_publisher to connect the frames.
My frames.pdf is here: 
When I set the fixed frame to odom, my LaserScan is sometimes red and sometimes green:
Like this(Red): 
But it changes.
When I set the fixed frame to base_link, the LaserScan looks fine, but then the map sometimes turns red (error) and sometimes green, and it tells me "No transform from [map] to [base_link]".
Is this an error or is it normal? Is my TF tree correct?
↧
How to use resulting map, odom frame data from ekf filter of robot_localization package
Hi,
I'm working on a custom-built robot and I'm trying to position the robot in a known map as well as control it in order to go from A to B. I found robot_localization to be exactly what I need for my project, but I have some issues understanding how it works (I have read most of the forum posts on ROS.org but cannot find an explicit answer to my concerns). To be more precise, I'm having trouble understanding which node instance's output to use for localization in the two-EKF-instance case (odom or map).
I am fusing wheel encoders and IMU in the odom frame, and in another instance of the robot_localization EKF node I am fusing wheel encoders, IMU and UWB in the map frame. I'm using UWB instead of GPS; it has a guaranteed 3 Hz update rate and supplies x and y coordinates in the world frame. As a result, I have two estimates of the robot's position in the two resulting frames (map and odom). As I understand it, the map -> odom transformation is constantly being computed and published and is not constant, due to drift in the odom frame and discrete jumps in the map frame. I also understand (from forum posts) that the odom-frame output is continuous and thus better suited for control. Is there any example of how to integrate robot_localization into a control loop?
The question is: which data should I use for localization? Are the two frames in any way connected/communicating in the background (except for the transform update)? Given the 3 Hz update rate, could I get away with only the odom-frame output (the other sensors have a 25 Hz update rate), which would also include the UWB readings?
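For concreteness, a minimal sketch of what I mean by using the odom-frame output in the control loop (topic name assumed to be the default odometry/filtered of the odom-frame instance):

    #include <ros/ros.h>
    #include <nav_msgs/Odometry.h>

    // Feed the continuous odom-frame estimate into the controller.
    void odomCallback(const nav_msgs::Odometry::ConstPtr& msg)
    {
      // Use msg->pose.pose and msg->twist.twist as the feedback signal here.
      ROS_DEBUG("x=%.2f y=%.2f", msg->pose.pose.position.x, msg->pose.pose.position.y);
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "controller");
      ros::NodeHandle nh;
      ros::Subscriber sub = nh.subscribe("odometry/filtered", 10, odomCallback);
      ros::spin();
      return 0;
    }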
I would greatly appreciate any help or pointers in the right direction. Thank you in advance.
Best regards,
↧
↧
How can I build a tf which is required by gmapping?
I have finished the base_link → odom tf, which looks like this:
    map
     |
    odom
     |
    base_link
But in another required tf transform, it reads
> (the frame attached to incoming scans) → base_link
> usually a fixed value, broadcast periodically by a robot_state_publisher, or a tf static_transform_publisher.
(http://wiki.ros.org/gmapping)
I cannot understand how to build the tf scan → base_link, and depthimage_to_laserscan (I use a Kinect) does not provide any tf transform.
And the tf tree on the Turtlebot looks like this:
    map
     |
    odom
     |
    base_link
     |
    ---------- camera_rgb_frame ----------
    |                  |                  |
    camera_depth_frame camera_link        camera_rgb_optical_frame
    |
    camera_depth_optical_frame
So, how can I connect the **scan data** to base_link and broadcast this tf like the Turtlebot does?
There are keywords like "camera_depth_frame", "camera_link"... can I broadcast them directly without any definition or export?
And since my Kinect is fixed on the base plate, does it need a URDF file, or is just using a static_transform_publisher fine?
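A minimal sketch of the "fixed value, broadcast periodically" option, done in code (the offset is illustrative; use where the Kinect is actually mounted):

    #include <ros/ros.h>
    #include <tf/transform_broadcaster.h>

    // Periodically broadcast the fixed base_link -> camera_link transform.
    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "kinect_tf_broadcaster");
      tf::TransformBroadcaster br;
      tf::Transform t(tf::Quaternion(0, 0, 0, 1), tf::Vector3(0.1, 0.0, 0.2));

      ros::Rate rate(20.0);
      while (ros::ok())
      {
        br.sendTransform(tf::StampedTransform(t, ros::Time::now(),
                                              "base_link", "camera_link"));
        rate.sleep();
      }
      return 0;
    }

Either this or a tf static_transform_publisher with the same numbers would connect the scan frame to base_link.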
Sincerely
↧
How to set namespace for tf?
I am currently using multimaster_fkie to communicate between two robots.
I can set a namespace for all nodes and topics, but I cannot set a namespace for the tf trees, so the tf frames conflict with each other.
Is there any solution?
↧
authority unknown_publisher
Hello everyone,
I am working on Ubuntu 14.04 with Gazebo 5 and ROS indigo.
I am just trying to make a robot that will be driven with the *libgazebo_ros_diff_drive.so* plugin.
I have created my robot in URDF and it works fine when I launch it in RViz with this launch file:
But when I try to launch it on Gazebo, I have this kind of error :
> Ignoring transform for child_frame_id "fourche" from authority "unknown_publisher" because of a nan value in the transform (-nan -nan -nan) (0,000000 0,000000 0,000000 1,000000)
My launch file for Gazebo :
Maybe a namespace problem?
Thanks in advance.
↧