UDP egress isn't properly masqueraded with Glorytun TCP #3796
Comments
I'm not able to reproduce the issue.
I ran into similar performance issues with Wireguard when OMR is in the mix as well. I wasn't able to track them down to the same root cause, but then again, I have a secondary NAT boundary past OMR. I did put up a discussion and also the results of further digging. The gist is that I now use the Xray proxy for both outbound and inbound functionality; it seems to handle Wireguard much better than Shadowsocks.
@AKHwyJunkie Wireguard is UDP only, Shadowsocks* are TCP only (or UDP, but not aggregated). *Ray can do UDP over TCP. Here it's a masquerade issue, but I'm not able to reproduce it for now.
I agree, it's reported as a MASQ issue. But the perceived impact of the issue was performance (10 Mbps), which is why I wanted to put up my experience relating to Wireguard and OMR.
I tried Xray as @AKHwyJunkie suggested, but I'm not seeing any improvements unfortunately. Strangely, iperf3-generated UDP streams can saturate my combined uplink (1300 Mbps) without issue. There's some packet loss, but no reordering. The moment I involve Wireguard, though, bandwidth tanks. It's not even 10 Mbps now, it's under 1 Mbps.

And sure enough, tcpdump sees UDP packets leaving the VPS public interface, addressed to the Wireguard client, with a source IP of 10.255.255.2 - which is the Glorytun VPN's "internal" end. I guess it makes sense that the router has to masquerade traffic sent out over the VPN tunnel, because the VPS has no route for the LAN. But then the VPS fails to masquerade these packets again on the public egress - at least some of them. The connection is not completely blocked, some packets do get the proper NAT treatment, and that's where the 1 Mbps of WG throughput comes from. But so many of them are sent out without SNAT that any TCP connection going through that Wireguard tunnel just throttles itself down to 1 Mbps, even though it could theoretically do as much as 1200 Mbps.

The iperf3 traffic doesn't seem to use the tunnel though, it just shows up as input on the router and output on the VPS - so I guess it's getting proxied. WG is not getting proxied; it goes through the VPN tunnel.
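For reference, this is roughly how I'm catching the un-masqueraded packets on the VPS. A sketch only - eth0 and the 10.0.0.0/8 match are assumptions, adjust to your public interface and tunnel subnet:

```sh
# Watch the public interface for UDP packets that still carry a private
# source address (eth0 and 10.0.0.0/8 are assumptions - adjust to your setup).
tcpdump -ni eth0 -c 100 'udp and src net 10.0.0.0/8'

# Check whether the POSTROUTING MASQUERADE/SNAT rules are seeing this traffic.
iptables -t nat -L POSTROUTING -v -n | grep -iE 'masq|snat'
```

If tcpdump keeps showing 10.x sources on the way out while the masquerade counters barely move, the packets are bypassing NAT rather than hitting a wrong rule.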
The Wireguard "server" in the LAN uses 51903 as the listen port; iperf3 uses 5201. I don't know if there's a difference between the handling of IANA registered ports and ephemeral ports, but that's the last straw I can grasp at right now. Update: Nope, changing the Wireguard port to 5201 didn't help either. Maybe iperf3 punches a NAT hole somehow, which creates a UDP association in the proxy, and WG doesn't do that...
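For completeness, this is the kind of comparison I'm running (a sketch - the hostnames are placeholders, and 1300M matches my aggregate uplink):

```sh
# LAN-side server:
iperf3 -s -p 5201

# Raw UDP pushed through OMR - this saturates the aggregate without issue:
iperf3 -c <lan-server> -u -b 1300M -p 5201

# A TCP stream through the Wireguard tunnel - this is what collapses to ~1 Mbps:
iperf3 -c <wg-peer-address> -p 5201
```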
By the way, I'm not trying to circumvent any blocking; I'm doing this purely for aggregation - and I'm pushing the limits of contemporary hardware (at least when encryption and encapsulation are both involved) by combining a 2000/1000 GPON with a 1000/300 GPON. Being able to run the Wireguard entry point on the VPS, without getting its port sealed every time the router does something over the OMR API, would technically be a superior solution for me. I could probably even find the piece of code that reconfigures iptables and change it locally. But a 10.0.0.0/8 source IP on a public interface looks like a bigger issue, which I think is worthy of OMR dev attention.
I assume you've validated that this same issue does not happen without OMR in the mix? One thing worth checking is the MTU/MSS used, as this could be degrading performance and/or resulting in a packet size too large for OMR to deal with. (I've never tested jumbo packets with OMR, but I'd assume the defaults don't handle them correctly.) The typical packet size (MSS) for the original packet going into a tunnel is 1420 bytes (IPv4) and 1380 bytes (IPv6). This allows the original packet to be encapsulated (by Wireguard) and then encapsulated again for transmission to the VPS, with both encapsulations resulting in 1500-byte packets.
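If MTU turns out to be a factor, something like this can rule it out quickly. A sketch only - wg0 and 1360 are assumptions; the right value depends on Glorytun's own overhead:

```sh
# Shrink the Wireguard MTU so Wireguard + Glorytun-TCP encapsulation still
# fits in a 1500-byte packet on the wire (1360 is a conservative guess).
ip link set dev wg0 mtu 1360

# Probe the largest payload that passes without fragmentation:
# 1332 ICMP payload + 8-byte ICMP header + 20-byte IP header = 1360 bytes.
ping -M do -s 1332 -c 5 <wg-peer-address>
```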
Expected Behavior
With the proxy set to Shadowsocks-rust-2022 and the VPN set to Glorytun-TCP, any LAN-side UDP egress packets are expected to leave the VPS upstream interface towards the public internet with the VPS public IP as the source address, ensuring proper response routing. See tcpdump snippet below:
Current Behavior
When a LAN-side client transmits UDP packets at a rate above 10 Mb/s (value seems arbitrary), the VPS public egress will erratically leave some of the packets un-masqueraded, resulting in a middlebox dropping them somewhere along the path. The TCP egress bandwidth is 1200 Mbit/s in my setup, but UDP caps out at 10 Mbit/s due to this.
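One way to quantify this on the VPS (a sketch, assuming conntrack-tools is installed and 10.255.255.0/24 is the Glorytun subnet):

```sh
# Flows from the Glorytun subnet that did get source-NATed:
conntrack -L -p udp --src-nat | grep 'src=10\.255\.255\.'

# All flows from that subnet, for comparison; entries that only appear in
# this second listing were forwarded without SNAT.
conntrack -L -p udp | grep 'src=10\.255\.255\.'
```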
Possible Solution
Probably some VPS-side misconfiguration of the firewall or the routing; I can't really debug that myself.
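In case it helps whoever picks this up, a catch-all masquerade like the following is what I'd expect to cover the leaking packets. A sketch, not a confirmed fix - eth0 and the 10.255.255.0/24 subnet are assumptions, and on an OMR VPS a change like this presumably belongs in the Shorewall configuration rather than raw iptables, or the daemon will overwrite it:

```sh
# Explicitly masquerade anything from the Glorytun tunnel subnet that leaves
# via the public interface.
iptables -t nat -A POSTROUTING -s 10.255.255.0/24 -o eth0 -j MASQUERADE
```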
Steps to Reproduce the Problem
Context (Environment)
I'm trying to run a Wireguard server inside my home for roaming clients to connect to. Ideally, I'd run the server on the VPS, but the OMR daemon just kills every modification I make to its Shorewall config, so I'm instead trying to tunnel the WG transport into the home. The reordering from multipath UDP kills TCP throughput, which is why I went for Glorytun-TCP.
Specifications