Commit 72f696a
Update README.md
1 parent 4c39d13

1 file changed: README.md (+25 -17)

First of all, set up and run the example, as described in the
[Getting Started](…) documentation.

## Setting Up Your Own Robot
Now you can use the working example as a starting point. To use your own
robot, you will need its URDF, and you will need to modify some launch and
config files in
[dbrt_example](https://git-amd.tuebingen.mpg.de/open-source/dbrt_getting_started/tree/master/dbrt_example).
The launch files should be self-explanatory and easy to adapt. You will need
to edit the file fusion_tracker_gpu.launch (fusion_tracker_cpu.launch) to use
your own robot model instead of Apollo.
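
As a rough sketch of the kind of edit involved, the snippet below loads a
different robot description in a launch file. The package and xacro names
(`my_robot_description`, `my_robot.urdf.xacro`) and the config path are
placeholders, not part of dbrt; mirror whatever fusion_tracker_gpu.launch
actually does for Apollo.

```xml
<launch>
  <!-- Hypothetical sketch: load your own robot model instead of Apollo.
       Package, file, and path names below are placeholders. -->
  <param name="robot_description"
         command="$(find xacro)/xacro '$(find my_robot_description)/robots/my_robot.urdf.xacro'" />

  <!-- Load the robot-specific tracker parameters (adapted below). -->
  <rosparam command="load"
            file="$(find dbrt_example)/config/fusion_tracker_gpu.yaml" />
</launch>
```
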
The main work will be to adapt the fusion_tracker_gpu.yaml
(fusion_tracker_cpu.yaml) file to your robot. All the parameters for the
tracking algorithm are specified in this file, and it is robot-specific. You
will have to adapt the link and joint names to your robot. Furthermore, you
can specify which joints should be corrected using the depth images, how
aggressively they should be corrected, and whether you want to estimate an
offset between the true camera and the nominal camera in your robot model.
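
To give an idea of the shape of such a file, here is an illustrative sketch.
The key names below are invented for illustration and are not the actual dbrt
parameter names; start from the fusion_tracker_gpu.yaml shipped with
dbrt_example and edit the values in place.

```yaml
# Illustrative sketch only: key names are invented, not the real dbrt schema.
joints:
  my_arm_joint_1:                # use your robot's joint names
    process_noise_std: 0.02      # joint process model uncertainty
    correct_with_depth: true     # correct this joint using depth images
    correction_strength: 1.0     # how aggressively to correct it
camera:
  frame: XTION_RGB               # must exist in your URDF
  estimate_offset: true          # estimate true vs. nominal camera offset
```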

### URDF Camera Frame

Our algorithm assumes that the frame of the depth image (specified by the
camera_info topic) exists in your URDF robot model. You can check the camera
frame by running

```bash
rostopic echo /camera/depth/camera_info
```

If this frame does not exist in your robot URDF, you have to add such a
camera frame to the part of the robot where the camera is mounted. This
requires connecting a camera link through a joint to another link of the
robot. Take a look at
[head.urdf.xacro](https://git-amd.tuebingen.mpg.de/open-source/dbrt_getting_started/blob/master/apollo_robot_model/models/head.urdf.xacro#L319).

The XTION camera link *XTION_RGB* is connected to the link *B_HEAD* through
the joint *XTION_JOINT*. The transformation between the camera and the robot
is not required to be very precise, since our algorithm can estimate an
offset. However, it must be accurate enough to provide a rough initial pose.
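
A minimal sketch of such an attachment is shown below. This is not the actual
contents of head.urdf.xacro; the xyz/rpy values are placeholders for your
rough camera calibration.

```xml
<!-- Sketch: attach a camera frame to the robot with a fixed joint.
     The xyz/rpy values are placeholders for the rough mounting pose. -->
<link name="XTION_RGB"/>

<joint name="XTION_JOINT" type="fixed">
  <parent link="B_HEAD"/>
  <child link="XTION_RGB"/>
  <origin xyz="0.1 0.0 0.2" rpy="0 0 0"/>
</joint>
```
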
## How to cite?
```
