Hi all,
I'm trying to use MoveIt to control a TurtleBot arm mounted on a TurtleBot. When I run my moveit.launch file, there is always a transform error:
[ERROR] [1543219008.768123467]: Transform error: Lookup would require extrapolation into the future. Requested time 1543219008.716465048 but the latest data is at time 1543219008.605108938, when looking up transform from frame [wheel_left_link] to frame [camera_rgb_optical_frame]
[ERROR] [1543219008.768212737]: Transform cache was not updated. Self-filtering may fail.
[ERROR] [1543219009.818338395]: Transform error: Lookup would require extrapolation into the future. Requested time 1543219009.718358101 but the latest data is at time 1543219009.712928056, when looking up transform from frame [arm_wrist_flex_servo_link] to frame [camera_rgb_optical_frame]
[ERROR] [1543219009.818410744]: Transform cache was not updated. Self-filtering may fail.
[ERROR] [1543219011.771677221]: Transform error: Lookup would require extrapolation into the future. Requested time 1543219011.720434783 but the latest data is at time 1543219011.650256141, when looking up transform from frame [wheel_left_link] to frame [camera_rgb_optical_frame]
[ERROR] [1543219011.771738074]: Transform cache was not updated. Self-filtering may fail.
I then used tf view_frames to check the rate at which each frame is published, and I found that a few frames are published at a very low rate:



Is this low publishing rate causing the transform error? Why are those tf frames published at different rates when they come from the same node? How can I fix this?
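For what it's worth, as a workaround I tried a minimal listener that waits for the transform before looking it up (a rough sketch; the frame names are copied from the error above and the 0.5 s timeout is an arbitrary value I picked), but I would still like to understand the low publish rate:
#include <ros/ros.h>
#include <tf/transform_listener.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "tf_lag_check");
  ros::NodeHandle nh;
  tf::TransformListener listener;
  ros::Rate rate(10.0);
  while (ros::ok())
  {
    tf::StampedTransform transform;
    try
    {
      // Wait briefly so a slightly delayed transform does not trigger
      // the "extrapolation into the future" error.
      listener.waitForTransform("camera_rgb_optical_frame", "wheel_left_link", ros::Time(0), ros::Duration(0.5));
      listener.lookupTransform("camera_rgb_optical_frame", "wheel_left_link", ros::Time(0), transform);
    }
    catch (tf::TransformException& ex)
    {
      ROS_WARN("%s", ex.what());
    }
    rate.sleep();
  }
  return 0;
}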
Many thanks!
↧
how to fix very low frame publish rate of turtlebot
↧
how to define urdf content when robot has rack-gear frame?
I am trying to build a robot with Dynamixel actuators and some frames that I made myself.
This robot is a bit special because it uses a rack-gear frame and a pinion gear.
Like this one:

I want to use robot_state_publisher, which uses kdl_parser with a URDF.
My problem is that I don't know how to write the URDF file.
Normally, robot actuators are connected directly via a joint or a frame. I have already done the URDF tutorials and built some robot arms like the OpenManipulator.
But when the world base has a rack gear and link1 moves along that rack gear, I don't know how to define this in the URDF.
Could I get a solution to this problem, or a tutorial or some insight?
↧
↧
How to get gazebo world frame in rviz and add to tf tree
Hello,
I am using ROS Kinetic and Gazebo 7.12.0. I am trying to validate how well my robot is moving against my performance specs. I would like to compare the robot's pose data with the ground-truth data from the Gazebo world. How can I get the Gazebo world frame into the tf tree?
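One idea I had (I am not sure it is the right approach) is to let Gazebo publish the ground-truth pose as a nav_msgs/Odometry topic, for example via the gazebo_ros p3d plugin, and re-broadcast it as a tf. A minimal sketch of that, with made-up topic and frame names:
#include <ros/ros.h>
#include <nav_msgs/Odometry.h>
#include <tf/transform_broadcaster.h>
#include <tf/transform_datatypes.h>

// Re-broadcast a ground-truth odometry topic as a tf from "world" to a
// dedicated ground-truth base frame, so the Gazebo pose shows up in the tf tree.
void groundTruthCallback(const nav_msgs::Odometry::ConstPtr& msg)
{
  static tf::TransformBroadcaster br;
  tf::Transform t;
  t.setOrigin(tf::Vector3(msg->pose.pose.position.x, msg->pose.pose.position.y, msg->pose.pose.position.z));
  tf::Quaternion q;
  tf::quaternionMsgToTF(msg->pose.pose.orientation, q);
  t.setRotation(q);
  br.sendTransform(tf::StampedTransform(t, msg->header.stamp, "world", "base_link_ground_truth"));
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "ground_truth_tf");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("ground_truth/state", 10, groundTruthCallback);
  ros::spin();
  return 0;
}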
↧
mismatching topics ar_track_alvar_msgs
Greetings~
While trying to read the position/orientation of a detected marker, I am getting this kind of error:
> [ERROR] [1544117434.771301999]: Client [/vision_node] wants topic /ar_pose_marker to have datatype/md5sum [ar_track_alvar_msgs/AlvarMarker/ef2b6ad42bcb18e16b22fefb5c0fb85f], but our version has [ar_track_alvar_msgs/AlvarMarkers/943fe17bfb0b4ea7890368d0b25ad0ad]. Dropping connection.
I understand the issue: my vision_node uses a different message type than the one the ar_track_alvar publisher advertises.
In this case, how can I get the header and pose of the detected marker?
Note: I believe ar_track_alvar_msgs/AlvarMarkers is meant for detecting a bunch of markers, not just one. I only need to detect a single marker, so it doesn't matter to me which kind of message to use.
Attaching part of /vision_node code down below:
Localizer(ros::NodeHandle& nh)
{
  ar_sub_ = nh.subscribe("ar_pose_marker", 1, &Localizer::visionCallback, this);
  server_ = nh.advertiseService("localize_part", &Localizer::localizePart, this);
}

void visionCallback(const ar_track_alvar_msgs::AlvarMarkerConstPtr& msg)
{
  last_msg_ = msg;
  //ROS_INFO_STREAM(last_msg_->pose.pose);
}

bool localizePart(myworkcell_core::LocalizePart::Request& req,
                  myworkcell_core::LocalizePart::Response& res)
{
  // Read last message
  ar_track_alvar_msgs::AlvarMarkerConstPtr p = last_msg_;
  if (!p) return false;

  tf::Transform cam_to_target;
  tf::poseMsgToTF(p->pose.pose, cam_to_target);

  tf::StampedTransform req_to_cam;
  listener_.lookupTransform(req.base_frame, p->header.frame_id, ros::Time(0), req_to_cam);

  tf::Transform req_to_target;
  req_to_target = req_to_cam * cam_to_target;

  tf::poseTFToMsg(req_to_target, res.pose);
  return true;
}

ros::Subscriber ar_sub_;
ar_track_alvar_msgs::AlvarMarkerConstPtr last_msg_;
ros::ServiceServer server_;
tf::TransformListener listener_;
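From what I understand, subscribing with the plural AlvarMarkers type and taking the first element of its markers array might be the way to go; an untested sketch (I am not sure whether the per-marker header or the top-level message header is the right one to use for tf):
#include <ar_track_alvar_msgs/AlvarMarkers.h>

void visionCallback(const ar_track_alvar_msgs::AlvarMarkersConstPtr& msg)
{
  if (msg->markers.empty())
    return;  // nothing detected in this frame
  // Keep only the first detected marker; its pose is a geometry_msgs/PoseStamped.
  const ar_track_alvar_msgs::AlvarMarker& m = msg->markers[0];
  ROS_INFO_STREAM("frame: " << m.header.frame_id << " pose: " << m.pose.pose);
}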
Feel free to clarify, I am bad at explaining:D
↧
tf from rosbag within node
Hello,
I recorded 40 rosbags of my mobile base moving autonomously; I only recorded the tf topic. If I play a bag from the command line I can see the model of my base in RViz, so the bags themselves are fine. But I want to visualize the driven path, and for that I need the coordinates, which I get with a tf listener. That works so far. However, I have to start the bags one by one from the terminal; I would prefer to run them from within my node: open one bag (index 01), play it, extract the coordinates, save them to a .txt file, play the next bag (index 02), and so on.
But it seems I don't get it. I tried the code at the end, but my problem is that I don't subscribe to the tf topic myself; I only need it for the listener. So I don't know what to do with the usual rosbag reading code (see the comments).
So, is there a way? Or do I have to deal with all 40 files by hand?
#include "ros/ros.h"
#include "geometry_msgs/PoseStamped.h"
#include "tf/tf.h"
#include "tf/transform_listener.h"
#include "tf/transform_datatypes.h"
#include <rosbag/bag.h>
#include <rosbag/view.h>
#include <boost/foreach.hpp>
#include <fstream>
using namespace std;
int main(int argc, char **argv)
{
ros::init(argc, argv, "logger");
ros::NodeHandle n;
ros::Rate loop_rate(10);
tf::TransformListener listener;
rosbag::Bag bag;
bag.open("/home/......../door_2/door_01.bag", rosbag::bagmode::Read);
std::vector<std::string> topics;
topics.push_back("/tf");
rosbag::View view(bag, rosbag::TopicQuery(topics));
while (ros::ok())
{
// normally something like this is used, but i don't subscribe to any topic
BOOST_FOREACH(rosbag::MessageInstance const m, view)
{
if (m.getTopic() == depth_cam_info)
{ }
}
// get coords (works if bag is run from terminal)
geometry_msgs::PoseStamped base_pose, map_pose;
base_pose.header.frame_id = "base_link";
base_pose.pose.orientation.w = 1.0;
listener.transformPose("map", base_pose, map_pose);
// append to file (works if bag is played from terminal)
ofstream myfile;
myfile.open ("/home/lukas/Desktop/coordinates.txt", std::ios_base::app);
myfile << map_pose.pose.position.x << ", " << map_pose.pose.position.y << "\n";
myfile.close();
// just needed
ros::spinOnce();
loop_rate.sleep();
}
}
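I was also wondering whether, instead of relying on a live listener, I could read the /tf messages from the bag myself and feed them into a tf::Transformer. A rough, untested sketch of that idea (note that a Transformer's default cache time is only about 10 s, so it would probably need to be enlarged for a whole bag):
#include <rosbag/bag.h>
#include <rosbag/view.h>
#include <tf/tf.h>
#include <tf2_msgs/TFMessage.h>
#include <boost/foreach.hpp>

// Fill a tf::Transformer with every transform stored on /tf in one bag.
void fillTransformer(const std::string& bag_path, tf::Transformer& transformer)
{
  rosbag::Bag bag(bag_path, rosbag::bagmode::Read);
  rosbag::View view(bag, rosbag::TopicQuery("/tf"));
  BOOST_FOREACH(const rosbag::MessageInstance& m, view)
  {
    tf2_msgs::TFMessage::ConstPtr tf_msg = m.instantiate<tf2_msgs::TFMessage>();
    if (!tf_msg)
      continue;
    BOOST_FOREACH(const geometry_msgs::TransformStamped& ts, tf_msg->transforms)
    {
      tf::StampedTransform stamped;
      tf::transformStampedMsgToTF(ts, stamped);
      transformer.setTransform(stamped, "rosbag");   // then lookupTransform() at the stamps I need
    }
  }
  bag.close();
}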
↧
↧
build octomap using pointcloud2 data
I could use a little help getting OctoMap to publish occupancy grid cell data. I have been reading the explanation on the ROS Wiki for how to build an OctoMap using the octomap_server node and PointCloud2 data. I have set up the following:
1. My RealSense camera is publishing data over the topic "/cloud_in".
2. I am publishing /tf frames for "base_link" to "camera_link" at 5 Hz.
3. I am running rosrun octomap_server octomap_server_node.
What else do I need to do to get octomap_server publishing occupancy grid cells on the topic "/occupied_cells_vis_array"?
I might be confused about how to set up the tf frames to match the data coming in over the "cloud_in" topic. My understanding is a bit hazy when it comes to tf frames and how they are matched with a topic.
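For reference, this is roughly how my base_link -> camera_link broadcaster is set up (a simplified sketch; the translation values are placeholders, and I am assuming "camera_link" has to match the frame_id in the /cloud_in headers):
#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "camera_tf_broadcaster");
  ros::NodeHandle nh;
  tf::TransformBroadcaster br;
  ros::Rate rate(5);   // the same 5 Hz I mentioned above
  while (ros::ok())
  {
    // Placeholder offset from base_link to the camera; the real values come from my robot.
    tf::Transform t(tf::Quaternion(0, 0, 0, 1), tf::Vector3(0.1, 0.0, 0.3));
    br.sendTransform(tf::StampedTransform(t, ros::Time::now(), "base_link", "camera_link"));
    rate.sleep();
  }
  return 0;
}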
↧
Problem getting a real-time map using hector_slam!!!
Hello all,
I am using an Xbox 360 Kinect, ROS Kinetic, and Ubuntu 16.04 LTS.
I am new to ROS. I want to map an unknown environment using the Kinect. For this I am using freenect and gmapping.
I have done these steps:
1. roslaunch freenect_launch freenect.launch
2. rosrun pointcloud_to_laserscan pointcloud_to_laserscan_node cloud_in:=/camera/depth/points
3. rosbag record -O mybag /scan
4. rostopic echo /scan (I can see scan data is available here)
5. rqt_graph
here is my [MY RQT_GRAPH](https://drive.google.com/open?id=1P10v22SWJYPIo_2HziGoaON2TSl-oPYk) and here is [MY FRAMES.PDF](https://drive.google.com/open?id=1wubudBvtFN2D0_AEz6lsZAdQK5L8GlLA)
This is all great.
But the problem is getting the data from the bag file:
1. roscore
2. rosparam set use_sim_time true
3. roslaunch hector_slam_launch tutorial.launch scan:=/base_scan
4. rosbag play --clock mybag.bag
5. rqt_graph
here is my [MY RQT_GRAPH](https://drive.google.com/open?id=1kipWQA95MoOC9qUFm06J336YTk2kBlQa) and here is [MY FRAMES.PDF](https://drive.google.com/open?id=1XqM7RxhbN8QMM6QS84NU7Nu2MqCFDoPv)
I can't see any error in RViz, but I can't see the map either.
The terminal where I launch hector_slam shows these errors:
[ WARN] [1515990267.492285826]: No transform between frames /map and scanmatcher_frame available after 20.002855 seconds of waiting. This warning only prints once.
[ INFO] [1515990270.173023564]: lookupTransform base_footprint to camera_depth_optical_frame timed out. Could not transform laser scan into base_frame.
5. rostopic pub syscommand std_msgs/String "savegeotiff"
but the terminal in which tutorial.launch is running states:
[ INFO] [1515995399.174200553]: HectorSM sysMsgCallback, msg contents: savegeotiff
[ INFO] [1515995399.174327483]: HectorSM Map service called
[ INFO] [1515995399.194847114]: GeotiffNode: Map service called successfully
[ INFO] [1515995399.222169236]: Cannot determine map extends!
[ INFO] [1515995399.222222911]: Couldn't set map transform
I have also run `rosrun tf view_frames` and noticed that `world` is not connected to `base_frame`.
Can anyone help me with this?
Please post a comment if any other specifications are required.
Thank you in advance.
↧
Best point to place base_link in a car-like robot
Hi all,
I am building a car-like robot, and now I am thinking about its tf tree and where I should place the frames. When looking for ROS standards, I found the [TF tutorials](http://wiki.ros.org/navigation/Tutorials/RobotSetup/TF) page, which suggests I should place base_link at the rotational center. But [REP 105](http://www.ros.org/reps/rep-0105.html) (which the TF page actually references) says that base_link can be attached to the base at any position or orientation. I am a bit confused because the two pages suggest different things. So, considering that I want to use the ROS navigation packages, what would be the ideal spot for base_link? The center of gravity? The geometric center? The center of the rear axle? Any other ideal place? If you could give me examples of tf trees of car-like robots, that would be awesome.
↧
Target pose from pointcloud
Hi, I'm using ROS Kinetic and PCL to find a target point. The aim is to use MoveIt to move to that target point/pose.
I used PCL to create a concave hull and, as shown in the code below, I publish a tf frame for the first point in cloud_hull.
However, I'm lost trying to get the pose as a geometry_msgs type or another format valid for path planning in MoveIt.
I can visualize the frame and get the coordinates; I want to know how to convert them to a PoseStamped or PointStamped so I can use the point as a goal.
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_hull (new pcl::PointCloud<pcl::PointXYZ>);  // XYZ point type assumed
pcl::ConcaveHull<pcl::PointXYZ> chull;
chull.setInputCloud (cloud_projected);
chull.setAlpha (0.1);
chull.reconstruct (*cloud_hull);
static tf::TransformBroadcaster br;
tf::Transform part_transform;
part_transform.setOrigin( tf::Vector3(cloud_hull->at(1).x, cloud_hull->at(1).y, cloud_hull->at(1).z) );
tf::Quaternion q;
q.setRPY(0, 0, 0);
part_transform.setRotation(q);
br.sendTransform(tf::StampedTransform(part_transform, ros::Time::now(), world_frame, "Potshell_outline"));
[Here](https://answers.ros.org/question/35084/transform-a-point-from-pointcloud/?answer=35086#post-id-35086) is an answer I was hoping to use, but I'm not sure how to get the header of the cloud_hull.
geometry_msgs::PointStamped pt;
geometry_msgs::PointStamped pt_transformed;
pt.header = myCloud->header;
pt.point.x = myCloud->points[1].x;
pt.point.y = myCloud->points[1].y;
pt.point.z = myCloud->points[1].z;
tf::TransformListener listener;
listener.transformPoint("target_frame", pt, pt_transformed);
If I use `pt.header = cloud_hull->header;` I get: no known conversion from `pcl::PCLHeader` to `std_msgs::Header_`.
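From reading about pcl_conversions, it looks like the header can be converted explicitly, something like this (untested sketch; I am assuming pcl_conversions is available and that an identity orientation is acceptable for now):
#include <pcl_conversions/pcl_conversions.h>
#include <geometry_msgs/PoseStamped.h>

geometry_msgs::PoseStamped goal;
pcl_conversions::fromPCL(cloud_hull->header, goal.header);   // pcl::PCLHeader -> std_msgs::Header
goal.pose.position.x = cloud_hull->at(1).x;
goal.pose.position.y = cloud_hull->at(1).y;
goal.pose.position.z = cloud_hull->at(1).z;
goal.pose.orientation.w = 1.0;   // identity orientation; the real one still has to be chosen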
Any help would be highly appreciated. In short, I want to get the pose of that point in cloud_hull so I can subscribe to the message and move to the point.
Thanks.
↧
↧
problem in calculating coordinate from PointCloud2
Hi all, this is my callback
void get_point_cloud(const sensor_msgs::PointCloud2 &img)
{
  sensor_msgs::PointCloud2 img1;
  tf::TransformListener listener;
  tf::StampedTransform transform;
  listener.lookupTransform(img.header.frame_id, "odom", ros::Time(0), transform);
  pcl_ros::transformPointCloud("odom", img, img1, listener);
  pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGBA>);
  pcl::fromROSMsg(img1, *cloud);
  cv::Mat frame;
  if (cloud->isOrganized())
  {
    frame = cv::Mat(cloud->height, cloud->width, CV_8UC3);
    for (int h = 0; h < frame.rows; h++)
    {
      for (int w = 0; w < frame.cols; w++)
      {
        pcl::PointXYZRGBA point = cloud->at(w, h);
        if (point.y > 0)
        {
          Eigen::Vector3i rgb = point.getRGBVector3i();
          frame.at<cv::Vec3b>(h, w)[0] = rgb[2];
          frame.at<cv::Vec3b>(h, w)[1] = rgb[1];
          frame.at<cv::Vec3b>(h, w)[2] = rgb[0];
        }
        else
        {
          frame.at<cv::Vec3b>(h, w)[0] = 0;
          frame.at<cv::Vec3b>(h, w)[1] = 0;
          frame.at<cv::Vec3b>(h, w)[2] = 0;
        }
      }
    }
  }
  cv::imshow("frame", frame);
  cv::waitKey(1);
}
I want to use the PointCloud2 as a laser for my SLAM package, so I try to convert `img` from the `camera_depth_optical_frame` frame into `img1` in the `odom` frame and then apply those coordinates to my map.
When I run this code I get:
terminate called after throwing an instance of 'tf2::LookupException'
what(): "camera_depth_optical_frame" passed to lookupTransform argument target_frame does not exist.
What is the problem? I can see the `odom` frame in the tf tree!
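One thing I am not sure about is the lifetime of the listener: I construct the `tf::TransformListener` inside the callback, so maybe it never has time to fill its buffer. A sketch of what I mean by moving it outside and waiting for the transform (untested):
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <tf/transform_listener.h>
#include <pcl_ros/transforms.h>

tf::TransformListener* g_listener = NULL;   // created once in main(), lives for the whole node

void get_point_cloud(const sensor_msgs::PointCloud2 &img)
{
  sensor_msgs::PointCloud2 img1;
  // Wait a little so the buffer can contain the needed transform instead of failing immediately.
  if (!g_listener->waitForTransform("odom", img.header.frame_id, img.header.stamp, ros::Duration(0.2)))
    return;
  pcl_ros::transformPointCloud("odom", img, img1, *g_listener);
  // ... process img1 as before ...
}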
And my other problem is:
When I don't use the tf transform and just use the `img` data (from camera_depth_optical_frame), some points have `point.y < 0`!
What does that mean? I thought those points must have `point.y` bigger than 0!
↧
Rotate quaternion by body yaw
How do I change a quaternion's body-fixed yaw?
Using `Matrix3x3`'s `getRPY` and then `setRPY` doesn't work, because those angles are about axes fixed to the reference frame, so that is fixed-axis yaw, not body yaw.
Using `getEulerYPR` and then `setEulerYPR` doesn't work either, because the yaw rotation is applied first, so it is affected by the following pitch and roll rotations.
I think `getEulerRPY`/`setEulerRPY` might work, but there are no such functions in the `tf` and `tf2` libraries.
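If I understand quaternion composition correctly, a body-fixed yaw change should be achievable by right-multiplying with a rotation about the body z axis; a small sketch of that idea (please correct me if the convention is wrong):
#include <tf/tf.h>

tf::Quaternion rotateBodyYaw(const tf::Quaternion& q, double delta_yaw)
{
  tf::Quaternion q_yaw;
  q_yaw.setRPY(0.0, 0.0, delta_yaw);   // pure yaw about the body z axis
  return (q * q_yaw).normalized();     // right-multiplication applies the rotation in the body frame
}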
↧
Understanding registerCallback() from TF MessageFilter tutorial
I'm going through the TF [MessageFilter tutorial](http://wiki.ros.org/tf/Tutorials/Using%20Stamped%20datatypes%20with%20tf%3A%3AMessageFilter) and I understand the code except for line 13:
tf_filter_->registerCallback( boost::bind(&PoseDrawer::msgCallback, this, _1) );
I'm more of a novice when it comes to using the `boost` libraries, but after reading some documentation, it seems that this line is creating a function object that equates to something of this form: `msgCallback(this, _1)` where `this` is the PoseDrawer object and `_1` is a placeholder that will be substituted with the first argument provided to the function object (which I'm assuming is done by the TF library under the hood).
What I'm confused about is how you know to provide the `this` object and that the callback function needs an `_1` placeholder. Is the `_1` simply because the callback function (shown below) only takes one argument? If my callback function had no parameters, would I just remove the `_1` argument when calling the function? Why is the `this` object necessary?
void msgCallback(const boost::shared_ptr<const geometry_msgs::PointStamped>& point_ptr)
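To check my understanding, I wrote a tiny standalone example of binding a member function the same way (nothing tf-specific; Printer and its print method are just made-up names):
#include <boost/bind.hpp>
#include <boost/function.hpp>
#include <iostream>

struct Printer
{
  void print(int value) { std::cout << "value = " << value << std::endl; }
};

int main()
{
  Printer p;
  // &p supplies the object the member function is called on; _1 is filled in
  // later with the single argument passed to f.
  boost::function<void(int)> f = boost::bind(&Printer::print, &p, _1);
  f(42);   // calls p.print(42)
  return 0;
}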
↧
Rotate the coordinates of IMU
Hey, I'm using an IMU, but the problem is that its coordinate frame is wrong: y/roll points north, x/pitch points east, and z points down. So I have to rotate it by 180° about roll in the positive direction. I know I have to do something like this:
tf::Transform transform;
transform.setOrigin( tf::Vector3(0.0, 0.0, 0.0) );
transform.setRotation( tf::createQuaternionFromRPY(0,0,M_PI/2) );
tf_broadcast->sendTransform(tf::StampedTransform(transform, ros::Time::now(), "/base_link", "imu_frame"));
But instead of using pi/2 I should use pi? And it should be (M_PI, 0, 0)?
↧
↧
octomap_server error: "Could not generate key for origin"
I have successfully run octomap_server to create a 3D grid and visualize it in RViz, but I have only been able to do this with octomap's frame_id corresponding to a static tf frame.
When I include my custom tf broadcaster (which generates a transform between the robot and the world from INS position and orientation data), octomap_server continually prints the error "Could not generate key for origin". I believe this is due to a mismatch between the camera depth data and the tf frame time stamps, but I have not been able to find any documentation on this error.
Has anyone else dealt with this error? Or can you see a potential problem in my tf broadcaster code?
#include <ros/ros.h>
#include <tf/transform_broadcaster.h>
#include <nav_msgs/Odometry.h>
#include <iostream>
tf::Quaternion ins_quat(0, 0, 0, 1); //Initial values for INS transform frame from world
tf::Vector3 ins_vect(0, 0, 0);
void insCallback(const nav_msgs::Odometry::ConstPtr& msg)
{
static tf::TransformBroadcaster broadcaster;
std::cout << "instide insCallback\n";
ins_quat = tf::Quaternion( //Collect the position data
msg->pose.pose.orientation.x,
msg->pose.pose.orientation.y,
msg->pose.pose.orientation.z,
msg->pose.pose.orientation.w
);
ins_vect = tf::Vector3(
msg->pose.pose.position.x,
msg->pose.pose.position.y,
msg->pose.pose.position.z
);
broadcaster.sendTransform(
tf::StampedTransform(
tf::Transform(
ins_quat, ins_vect
),
ros::Time::now(),
"world", "ins_link"
)
);
broadcaster.sendTransform( //TODO Should make this a static tf in the future
tf::StampedTransform(
tf::Transform(
tf::Quaternion(0, 0, -0.707, 0.707), tf::Vector3(0.165, 0.05, 0.345) //static orientation
),
ros::Time::now(),
"ins_link", "camera_link"
)
);
}
int main(int argc, char** argv){
ros::init(argc, argv, "tf_broadcaster");
ros::NodeHandle n;
ros::Subscriber ins_sub = n.subscribe("ins", 1, insCallback);
ros::spin();
}
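One variation I have been meaning to try, related to my time-stamp theory, is stamping the transform with the INS message's own time instead of ros::Time::now(), roughly like this:
broadcaster.sendTransform(
  tf::StampedTransform(
    tf::Transform(ins_quat, ins_vect),
    msg->header.stamp,   // use the odometry stamp instead of wall-clock "now"
    "world", "ins_link"));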
↧
Is it safe and efficient for several threads to share a single tf::TransformListener object?
I am developing a multi-threaded ROS (kinetic/melodic) node that will rely on tf::TransformListener to wait until it is possible to perform transformations (using waitForTransform(...)), and then perform the transformations (using lookupTransform(...)), if the transform exists.
My question is whether or not it is safe and efficient for all threads to share a single transform listener object:
* Is it safe for several threads to call methods on a single transform listener object simultaneously?
* If a thread calls one of the above-mentioned methods, will any other thread be able to call the same (or another) method on the same transform listener object and run in parallel with the first thread, or will the second thread be blocked until the first thread has finished its call? Having the threads running in parallel, without blocking, is of high importance.
* In case threads would block (and to avoid that blocking), would it be okay to have several transform listeners running inside a single node (one listener per thread)? My intuition says that the same transformation tree would then be built and maintained for each thread, which seems very bad from an efficiency point of view.
I apologize for any duplicate posting (I was unable to find the answer in the forum) but would be grateful for any answer you could provide!
↧
rosserial embedded - tf to Publish an Odometry transform cause Segmentation fault
Hi Community,
I'm trying to add odometry to my tiny robot.
So far, rosserial is running on a small Linux board and gets its sensor_msgs::Imu data from an onboard IMU sensor.
Now I am working on (primitive) odometry. For reference I have used the [Navigation Tutorial](http://wiki.ros.org/navigation/Tutorials/RobotSetup/Odom), which worked great on my PC with rosbag and RViz.
But when I try to combine rosserial with tf (from the tutorial) in my rosserial node, I get a segmentation fault at:
odom_broadcaster.sendTransform(odom_trans); // Line 55 on the Tutorial page
Changes I made to the tutorial code for rosserial:
from:
ros::Publisher odom_pub = n.advertise<nav_msgs::Odometry>("odom", 50);
to:
ros::Publisher odom_pub ("odom", &odom_msg);
nh.advertise(odom_pub);
But whatever I try,
odom_broadcaster.sendTransform(odom_trans);
from the tutorial, as provided in libs/ros_lib/tf/transform_broadcaster.h, causes a segmentation fault.
My other messages work fine, as long as this one line is commented out...
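For context, this is roughly how I believe the ros_lib broadcaster is meant to be wired up (a trimmed sketch with simplified names; I am not sure whether the init(nh) call is strictly required, but it appears in the rosserial examples I found):
#include <ros.h>
#include <tf/transform_broadcaster.h>
#include <geometry_msgs/TransformStamped.h>

ros::NodeHandle nh;
geometry_msgs::TransformStamped odom_trans;
tf::TransformBroadcaster odom_broadcaster;

void setup_odom()
{
  nh.initNode();
  odom_broadcaster.init(nh);   // register the /tf publisher with the node handle
}

void publish_odom()
{
  odom_trans.header.stamp = nh.now();
  odom_trans.header.frame_id = "odom";
  odom_trans.child_frame_id = "base_link";
  odom_trans.transform.rotation.w = 1.0;   // placeholder identity rotation
  odom_broadcaster.sendTransform(odom_trans);
}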
Does anyone have advice for me, or has anyone encountered the same or a similar problem?
↧
I am trying to import tf in my python code, but it is throwing error "AttributeError: 'tuple' object has no attribute 'type'". please help me on this issue.
I am using this code to get the tf of the robot with respect to the map, so I am importing tf in my Python code. This code was working fine a few days ago; all of a sudden I am getting this strange error. I have attached the error below.
cch-student@Braavos:~/mirnew_ws/src/miriam/multi_robot_action_move/scripts$ rosrun multi_robot_action_move robots_pos_publish.py
Traceback (most recent call last):
File "/home/cch-student/mirnew_ws/src/miriam/multi_robot_action_move/scripts/robots_pos_publish.py", line 4, in
import tf
File "/opt/ros/kinetic/lib/python2.7/dist-packages/tf/__init__.py", line 29, in
from listener import Transformer, TransformListener, TransformerROS
File "/opt/ros/kinetic/lib/python2.7/dist-packages/tf/listener.py", line 29, in
import numpy
File "/home/cch-student/.local/lib/python2.7/site-packages/numpy/__init__.py", line 142, in
from . import core
File "/home/cch-student/.local/lib/python2.7/site-packages/numpy/core/__init__.py", line 57, in
from . import numerictypes as nt
File "/home/cch-student/.local/lib/python2.7/site-packages/numpy/core/numerictypes.py", line 111, in
from ._type_aliases import (
File "/home/cch-student/.local/lib/python2.7/site-packages/numpy/core/_type_aliases.py", line 63, in
_concrete_types = {v.type for k, v in _concrete_typeinfo.items()}
File "/home/cch-student/.local/lib/python2.7/site-packages/numpy/core/_type_aliases.py", line 63, in
_concrete_types = {v.type for k, v in _concrete_typeinfo.items()}
AttributeError: 'tuple' object has no attribute 'type'
Please help me with this error.
↧
↧
How to remove a TF from a ROS bag?
I have a bag that contains a large tf tree in which one transform (base_link -> virtual_cam) was calculated from several other TFs and parameters. I would like to replay this bag without this generated transform, so I can interactively re-generate it as I play the bag.
Hence the question: how do I remove this transform from the bag? Alternatively, can I play the bag without outputting this transform?
Specs: Ubuntu 12.04, Fuerte
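I know there is a `rosbag filter` command that takes a Python expression, so maybe that is the intended route. Otherwise, a rough sketch of rewriting the bag myself (untested; the exact rosbag API may differ slightly on Fuerte, and the frame name is the one from my tree):
#include <rosbag/bag.h>
#include <rosbag/view.h>
#include <tf/tfMessage.h>
#include <boost/foreach.hpp>

int main()
{
  rosbag::Bag in("input.bag", rosbag::bagmode::Read);
  rosbag::Bag out("filtered.bag", rosbag::bagmode::Write);
  rosbag::View view(in);
  BOOST_FOREACH(const rosbag::MessageInstance& m, view)
  {
    tf::tfMessage::Ptr tf_msg = m.instantiate<tf::tfMessage>();
    if (m.getTopic() == "/tf" && tf_msg)
    {
      // Copy the tf message, dropping the generated base_link -> virtual_cam transform.
      tf::tfMessage filtered;
      for (size_t i = 0; i < tf_msg->transforms.size(); ++i)
        if (tf_msg->transforms[i].child_frame_id != "virtual_cam")
          filtered.transforms.push_back(tf_msg->transforms[i]);
      if (!filtered.transforms.empty())
        out.write(m.getTopic(), m.getTime(), filtered);
    }
    else
    {
      out.write(m.getTopic(), m.getTime(), m);   // pass every other message through unchanged
    }
  }
  in.close();
  out.close();
  return 0;
}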
↧
Reading data from the shifted laser, as if it was in the middle
I have algorithms that use raw laser data in the range [-pi/2, pi/2]. Earlier, the laser was mounted in the middle of the robot. Now it has been moved closer to the front-left edge and rotated by pi/4.
What is the best way to transform the data from the shifted laser so that it looks as if the laser were still in the middle?
I suppose I have to use tf, but I don't know exactly how.
I'm using ROS Kinetic.
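The only idea I have so far is to use tf together with laser_geometry to express each scan in the old center frame; a sketch of that (assuming the old center frame is base_link, and noting that the output is a point cloud rather than a LaserScan, so my algorithms would need adapting):
#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>
#include <sensor_msgs/PointCloud.h>
#include <laser_geometry/laser_geometry.h>
#include <tf/transform_listener.h>

laser_geometry::LaserProjection projector;
tf::TransformListener* listener;   // created in main()

void scanCallback(const sensor_msgs::LaserScan::ConstPtr& scan)
{
  sensor_msgs::PointCloud cloud;
  try
  {
    // Project the scan and express the points in the robot-centered frame.
    projector.transformLaserScanToPointCloud("base_link", *scan, cloud, *listener);
  }
  catch (tf::TransformException& ex)
  {
    ROS_WARN("%s", ex.what());
    return;
  }
  // cloud.points are now given relative to base_link, as if the laser sat in the middle.
}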
↧
How to change parent in tf tree
Hi, I am trying to evaluate a SLAM algorithm within Gazebo 7.0 and ROS Kinetic, running on Ubuntu 16.04.
This algorithm provides me with a transform between the (estimated) camera and the resulting point cloud.
The input source is a camera rigidly attached to a drone, for which I get a tf tree connected to the world frame.
My idea was to:
1) initially provide a fixed transform between the estimated camera link and the actual camera link while the drone is in the starting position (with zero translation and rotation),
resulting in the following (simplified) tree
world->drone->camera/estimated camera->point cloud
2) get the transform from world to the point cloud with a listener (world->point cloud),
3) invert the transform between the estimated camera and the point cloud (point cloud->camera),
4) apply the transform (world->point cloud) instead of the one between the camera and the estimated camera, to get:
world---->drone->camera
|---->point cloud->estimated camera
Unfortunately, I could not find out how to **remove the old parent** (estimated camera) from the point cloud, and therefore I get a circular tf dependency instead.
Could somebody give me a hint how to do this, or am I following a completely wrong/unsupported idea? (The final goal is to compare the error/offset between the two camera frames.)
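To make steps 2-4 concrete, this is roughly what I am doing (simplified frame names, untested); the comment marks where the duplicate-parent / circular dependency appears:
#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <tf/transform_broadcaster.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "reparent_point_cloud");
  tf::TransformListener listener;
  tf::TransformBroadcaster broadcaster;
  ros::Rate rate(30);
  while (ros::ok())
  {
    tf::StampedTransform world_to_cloud;
    try
    {
      listener.lookupTransform("world", "point_cloud", ros::Time(0), world_to_cloud);
    }
    catch (tf::TransformException& ex)
    {
      rate.sleep();
      continue;
    }
    // Re-publish the same pose with world as the direct parent. As long as the
    // old estimated_camera -> point_cloud transform is still broadcast as well,
    // point_cloud effectively has two parents, which is exactly my problem.
    broadcaster.sendTransform(tf::StampedTransform(world_to_cloud, ros::Time::now(), "world", "point_cloud"));
    rate.sleep();
  }
  return 0;
}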
↧