**docs/getting-started-wsl.md**
3. Now, let's install Ubuntu.
- Download the WSL2 kernel from [here](https://docs.microsoft.com/en-us/windows/wsl/wsl2-kernel).
- Open a PowerShell window and run the command `wsl --set-default-version 2` to use WSL2 by default.
- Install Ubuntu 22.04 LTS or Ubuntu 24.04 LTS from the Microsoft Store.
- Open the Ubuntu app in the Start menu. It will open a command prompt and ask you to create a new UNIX username and password for your WSL2 Ubuntu installation.
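Once the steps above are done, a quick sanity check (not part of the official steps above) is to confirm the distribution is actually running under WSL2:

```
# In PowerShell: list installed distributions and their WSL versions.
# Every row should show VERSION 2.
wsl -l -v

# If your Ubuntu install reports VERSION 1, convert it in place.
# "Ubuntu-24.04" is the distro name as it appears in the NAME column;
# substitute whatever name the listing shows for your install.
wsl --set-version Ubuntu-24.04 2
```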
**docs/getting-started.md**
We currently only support Linux, specifically Ubuntu.
If you have an X86_64 machine, we support Ubuntu 22.04 LTS and Ubuntu 24.04 LTS.
If you have an ARM64 (also known as AARCH64) machine, we support Ubuntu 24.04 LTS.
You are welcome to use a different version or distribution of Linux, but may need to make some tweaks in order for things to work.
You can use Ubuntu 22.04 LTS or Ubuntu 24.04 LTS inside Windows through Windows Subsystem for Linux, by following [this guide](./getting-started-wsl.md). **Running and developing Thunderbots on Windows is experimental and not officially supported.**
### Getting the Code
- If we want to run it with real robots:
232
232
- Open your terminal, `cd` into `Software/src` and run `ifconfig`.
- Pick the network interface you would like to use. If you would like to communicate with robots on the network, make sure to select the interface that is connected to the same network as the robots.
- For example, on a sample machine, the output may look like this:
```
wlp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
...
[omitted]
...
...
```
- An appropriate interface we could choose is `wlp3s0`
- Hint: If you are using a wired connection, the interface will likely start with `e-`. If you are using a WiFi connection, the interface will likely start with `w-`.
- If we are running the AI as "blue": `./tbots.py run thunderscope_main --interface=[interface_here] --run_blue`
- If we are running the AI as "yellow": `./tbots.py run thunderscope_main --interface=[interface_here] --run_yellow`
- `[interface_here]` corresponds to the `ifconfig` interfaces seen in the previous step
- For instance, a call to run the AI as blue on WiFi could be: `./tbots.py run thunderscope_main --interface=wlp3s0 --run_blue`. This will start Thunderscope and set up communication with robots over the WiFi interface. It will also listen for referee and vision messages on the same interface.
- **Note: You do not need to include the `--interface=[interface_here]` argument!** You can run Thunderscope without it and use the dynamic configuration widget to set the interfaces used to send and receive robot, vision and referee messages.
- If you choose to include the `--interface=[interface_here]` argument, Thunderscope will listen for and send robot messages on this interface, as well as receive vision and referee messages.
- Using the dynamic configuration widget is recommended at RoboCup. To reduce latencies, connect the robot router to the AI computer via ethernet and use a separate ethernet connection to receive vision and referee messages. In this configuration, Thunderscope will need to bind to two different interfaces, each likely starting with `e-`.
- If you have specified `--run_blue` or `--run_yellow`, navigate to the "Parameters" widget. In "ai_config" > "ai_control_config" > "network_config", you can set the appropriate interface using the dropdowns for robot, vision and referee message communication.
- This command will set up robot communication and the Unix full system binary context manager. The Unix full system context manager hooks up our AI, Backend and SensorFusion.
2. Run AI along with Robot Diagnostics:
- The Mechanical and Electrical sub-teams use Robot Diagnostics to test specific parts of the Robot.
- If we want to run with one AI and Diagnostics:
- `./tbots.py run thunderscope_main [--run_blue | --run_yellow] --run_diagnostics --interface=[interface_here]` will start Thunderscope
- `[--run_blue | --run_yellow]` indicates which FullSystem to run
- `--run_diagnostics` indicates if diagnostics should be loaded as well
- Initially, the robots are all connected to the AI and only receive input from it
- To change the input source for the robot, use the drop-down menu of that robot to change it between None, AI, and Manual
- None means the robots are receiving no commands
- More info about Manual control below
- `--interface=[interface_here]` corresponds to the `ifconfig` interfaces seen in the previous step
- For instance, a call to run the AI as blue on WiFi could be: `./tbots.py run thunderscope_main --interface=wlp3s0 --run_blue --run_diagnostics`
- The `--interface` flag is optional. If you do not include it, you can set the interface in the dynamic configuration widget, as described above.
3. Run only Diagnostics
- To run just Diagnostics:
- `./tbots.py run thunderscope --run_diagnostics --interface <network_interface>`
---

1. For optimal performance, make sure that any packets sent over the network are below the MTU size (1500 bytes). Packets larger than the MTU require multiple transmissions for a single send event and multiple retransmissions in the case of packet loss. Overall, these packets contribute to greater utilization of the network and increased latency. A quick way to probe this is shown in the sketch after this list.
2. Connect the host computer to the network via ethernet cable when possible. Minimizing utilization of the WiFi network significantly improves the round-trip time.
3. Prefer unicast communication over multicast for frequent, low-latency communication over WiFi. [RFC 9119](https://www.rfc-editor.org/rfc/rfc9119.html#section-3.1.2) provides a good overview of the limitations of multicast communication over WiFi. In short, routers are forced to transmit at the lowest common data rate of all devices on the network to ensure that all devices receive the packet, meaning that the network is slowed down by the slowest device. In addition, router features such as Multiple Input Multiple Output (MIMO) may not be available when using multicast communication. In our benchmarking tests, we found a 24% improvement in round-trip time when switching from multicast to unicast communication.
4. On embedded Linux devices, WiFi power management seems to cause significant latency spikes. To disable power management, run the following command: `sudo iw dev {wifi_interface} set power_save off`, where `{wifi_interface}` is the name of the WiFi interface (e.g. `wlan0`). The sketch below also shows how to check the current setting.
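The following is a minimal sketch for checking tips 1 and 4 from a Linux host. The interface name `wlan0` is a placeholder for your own interface, and `{robot_ip}` is the robot's address. `ping -M do` (Linux iputils) forbids fragmentation, so a 1472-byte payload (1500-byte MTU minus 20 bytes of IP header and 8 bytes of ICMP header) should go through, while anything larger should be rejected instead of silently fragmenting.

```
# Tip 1: probe the path MTU. 1472 + 8 (ICMP) + 20 (IP) = 1500 bytes.
ping -M do -s 1472 -c 3 {robot_ip}   # should succeed
ping -M do -s 1600 -c 3 {robot_ip}   # should fail with "Message too long"

# Tip 4: check, then disable, WiFi power saving on the device.
iw dev wlan0 get power_save
sudo iw dev wlan0 set power_save off
```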
## Debugging
We have built some tools to help diagnose network latency problems without the confounding effects of running Thunderloop and Thunderscope and their associated overheads.
The latency tester tests the round trip time between two nodes. The primary node sends a message to the secondary node, which sends back the same message as soon as it receives it. The primary node then measures the round trip time.
Typically, the primary node is the host computer and the secondary node is the robot.
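Before running the tester, a plain `ping` gives a rough baseline to compare against. This is only a sanity check of the link itself, not a replacement for the tester, which measures the application-level round trip:

```
# Rough round-trip-time baseline from the host to the robot.
# The summary line reports min/avg/max/mdev RTT in milliseconds.
ping -c 20 {robot_ip}
```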
## Running the latency tester with the robot
### Prerequisites
You must know:
- The IP address of the robot. We will refer to this address as `{robot_ip}`.
- The WiFi interface of the robot. We will refer to this interface as `{robot_wifi_interface}`. This interface is typically found by running `ifconfig` or `ip a` on the robot.
- The network interface of the host computer. We will refer to this interface as `{host_interface}`. This interface is typically found by running `ifconfig` or `ip a` on the host computer.
2. Copy the binary to the robot: `scp bazel-bin/software/networking/benchmarking_utils/latency_tester_secondary_node robot@{robot_ip}:/home/robot/latency_tester_secondary_node`
3. SSH into the robot: `ssh robot@{robot_ip}`
4. There are two test modes: multicast or unicast. A worked pair of matching commands is shown after this list.
1. For multicast:
1. Run the latency tester secondary node: `./latency_tester_secondary_node --interface {robot_wifi_interface}`
- You may optionally also provide the following arguments:
- `--runtime_dir` to specify the directory where log files are stored
- `--listen_port` to specify the port on which the secondary node listens for messages
- `--send_port` to specify the port on which the secondary node sends messages
- `--listen_channel` to specify the channel on which the secondary node listens for messages
- `--send_channel` to specify the channel on which the secondary node sends back replies
2. On a different terminal on the host computer, run the latency tester primary node: `./tbots.py run latency_tester_primary_node -- --interface {host_interface}`
- You may optionally also provide the following arguments:
- `--runtime_dir` to specify the directory where log files are stored
- `--listen_port` to specify the port on which the primary node listens for replies to messages. This port must match the `--send_port` argument provided to the secondary node.
- `--send_port` to specify the port on which the primary node sends messages. This port must match the `--listen_port` argument provided to the secondary node.
- `--listen_channel` to specify the channel on which the primary node listens for replies to messages. This channel must match the `--send_channel` argument provided to the secondary node.
- `--send_channel` to specify the channel on which the primary node sends messages. This channel must match the `--listen_channel` argument provided to the secondary node.
- `--num_messages` to specify the number of messages to send
- `--message_size_bytes` to specify the size of the message payload in bytes
- `--timeout_duration_ms` to specify the duration in milliseconds to wait for a reply before retransmitting the message
- `--initial_delay_s` to specify the delay in seconds before sending the first message
2. For unicast:
1. Run the latency tester secondary node: `./latency_tester_secondary_node --interface {robot_wifi_interface} --unicast`
- You may optionally also provide the following arguments:
- `--runtime_dir` to specify the directory where log files are stored
- `--listen_port` to specify the port on which the secondary node listens for messages
- `--send_port` to specify the port on which the secondary node sends messages
- `--send_ip` to specify the IP address of the primary node to send replies to
2. On a different terminal on the host computer, run the latency tester primary node: `./tbots.py run latency_tester_primary_node -- --interface {host_interface} --unicast`
- You may optionally also provide the following arguments:
- `--runtime_dir` to specify the directory where log files are stored
- `--listen_port` to specify the port on which the primary node listens for replies to messages. This port must match the `--send_port` argument provided to the secondary node.
- `--send_port` to specify the port on which the primary node sends messages. This port must match the `--listen_port` argument provided to the secondary node.
- `--send_ip` to specify the IP address of the secondary node to send messages to (`{robot_ip}`)
- `--num_messages` to specify the number of messages to send
- `--message_size_bytes` to specify the size of the message payload in bytes
- `--timeout_duration_ms` to specify the duration in milliseconds to wait for a reply before retransmitting the message
- `--initial_delay_s` to specify the delay in seconds before sending the first message
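To illustrate how the port arguments must mirror each other, here is a hypothetical unicast session. The host address `192.168.0.10` and the ports `42000`/`42001` are made-up placeholder values, not project defaults:

```
# On the robot (secondary node): listen on 42000, reply from 42001
# to the primary node at 192.168.0.10 (hypothetical host address).
./latency_tester_secondary_node --interface {robot_wifi_interface} --unicast \
    --listen_port 42000 --send_port 42001 --send_ip 192.168.0.10

# On the host (primary node): the ports are mirrored. --send_port matches
# the secondary's --listen_port, and --listen_port matches its --send_port.
./tbots.py run latency_tester_primary_node -- --interface {host_interface} --unicast \
    --listen_port 42001 --send_port 42000 --send_ip {robot_ip} \
    --num_messages 100
```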
5. This tool can also be run with Tracy, a profiling tool, which provides some nice performance visualizations and histograms. To do so:
1. Make sure Tracy is installed; if it is not, run `./environment_setup/install_tracy.sh` to install it.
2. On a new terminal on the host computer, run Tracy: `./tbots.py run tracy`
3. When running the latency tester primary node, add the `--tracy` flag to the command before the `--`. For example: `./tbots.py run latency_tester_primary_node --tracy -- --interface {host_interface}`
4. Tracy will allow you to select the binary to profile and provide detailed performance information after the tester has run.