
Tailscale + VPS: The Ultimate Network Setup for Remote AI Coding
Use Tailscale mesh networking with a Netherlands VPS as an Exit Node to solve HuggingFace timeouts, GitHub clone failures, and API connectivity issues. Complete workflow for remote AI development.
The biggest pain point in AI development from home is not GPU power. It is network connectivity. HuggingFace times out. GitHub clones fail. Conda downloads at 10 KB/s.
I spent weeks figuring this out. If you have a VPS in a well-connected region, combined with Tailscale, you can solve almost all network problems. This is my journey and a complete guide.
The Pain: Network is the Silent Killer
You have experienced these scenarios:
- `pip install transformers` hangs on "Resolving" for minutes
- `git clone` of a 5GB model repo fails overnight
- OpenAI API calls time out constantly
- HuggingFace `.safetensors` downloads reset mid-way
I have a decent Linux machine at home. Hardware is fine. Network is not. Then I got a Netherlands VPS (cheap) and noticed the same code runs flawlessly there.
The question became: how do I make my home machine "borrow" the Netherlands network?
Solution 1: Tailscale Exit Node (Recommended)
This is my current setup. The principle is simple: make the Netherlands VPS a Tailscale exit node, and route all traffic from the home machine through it.
To the outside world, your home machine appears to be in the Netherlands.
Step 1: Install Tailscale on Both Machines
Go to Tailscale, register an account, then install on both machines:
```bash
curl -fsSL https://tailscale.com/install.sh | sh
```

Log in:

```bash
sudo tailscale up
```

A link will appear for browser authorization. After that, `tailscale status` should show both machines online.
Step 2: Configure Exit Node on the VPS
On your Netherlands VPS:
```bash
# Enable IP forwarding (required)
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

# Advertise as an exit node
sudo tailscale up --advertise-exit-node
```

Then go to the Tailscale Admin Console, find your VPS, click Edit route settings, and check Use as exit node.
Step 3: Use the Exit Node from Home
Assuming your VPS is named dutch-vps in Tailscale:
```bash
sudo tailscale up --exit-node=dutch-vps
```

Done. All your traffic now routes through the Netherlands.
Verify
```bash
curl ipinfo.io
```

If the returned IP and region show the Netherlands, you are set.
Quick Toggle Aliases
Add these to your .bashrc:
```bash
# VPN on (route through VPS)
alias vpn-on='sudo tailscale up --exit-node=dutch-vps'

# VPN off (direct connection)
alias vpn-off='sudo tailscale up --exit-node='
```

Solution 2: HuggingFace Mirror (Fastest Downloads)
If your main issue is downloading large models from HuggingFace, use a mirror instead of a proxy. It is faster.
Add this to your terminal or .bashrc:
```bash
export HF_ENDPOINT=https://hf-mirror.com
```

Now `huggingface-cli download` and the `transformers` library will use the mirror automatically. Download speeds can max out your bandwidth.
Note: This only solves HuggingFace downloads. It does not help with OpenAI API or GitHub.
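If `huggingface-cli` is installed, the mirror can also be scoped to a single download instead of exported globally. A minimal sketch; the function name and the model ID are only examples, not anything official:

```shell
# hf_dl: download one model repo via the mirror, without setting
# HF_ENDPOINT for the whole shell. The model ID is a placeholder.
hf_dl() {
  local model_id="$1"
  # ${model_id##*/} strips the org prefix, e.g. Qwen/Qwen2-0.5B -> Qwen2-0.5B
  HF_ENDPOINT=https://hf-mirror.com \
    huggingface-cli download "$model_id" --local-dir "models/${model_id##*/}"
}

# Usage: hf_dl Qwen/Qwen2-0.5B
```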
Solution 3: Precise Proxying (Proxychains)
If you want to proxy only specific scripts rather than routing all traffic through the VPS, use targeted proxying.
Assuming you have a local proxy client (like Clash) on port 7890:
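Before the two methods, a small convenience wrapper (a sketch, assuming that same 127.0.0.1:7890 proxy) can scope the proxy variables to a single command, so nothing else in your shell is affected:

```shell
# pxy: run one command with proxy variables set, leaving the rest of
# the shell untouched. Adjust the port to match your proxy client.
pxy() {
  http_proxy=http://127.0.0.1:7890 \
  https_proxy=http://127.0.0.1:7890 \
  no_proxy=localhost,127.0.0.1 \
  "$@"
}

# Usage: pxy git clone https://github.com/some-user/some-repo.git
```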
Method A: Temporary Environment Variables
```bash
export http_proxy=http://127.0.0.1:7890
export https_proxy=http://127.0.0.1:7890
python my_ai_script.py
```

Method B: Proxychains Force Takeover
Some libraries (especially those with C++ backends) ignore environment variables. Use Proxychains:
```bash
sudo apt install proxychains4
```

Edit /etc/proxychains4.conf and change the last line to:

```
http 127.0.0.1 7890
```

Then run:

```bash
proxychains4 python my_ai_script.py
```

Is Remote SSH Good for AI Coding?
This is another question I have been exploring. The answer: absolutely yes.
Why Remote Servers Work Better
- 24/7 Uptime: Run AI tasks in tmux on the server. Close your laptop; it keeps running. Wake up to finished code.
- Environment Isolation: AI-generated code often needs many dependencies. Install them on the server, not your local system.
- Network Advantage: Overseas servers have more stable connections to the OpenAI and Anthropic APIs.
- VS Code Remote SSH: The local machine handles only the UI; all code execution happens on the server. Feels like local development.
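The tmux piece of this workflow can be sketched like so; the session name and `train.py` are placeholders for your own setup:

```shell
# Start a long task in a detached tmux session so it survives SSH
# disconnects. "work" is an arbitrary session name.
tmux new-session -d -s work
tmux send-keys -t work 'python train.py 2>&1 | tee train.log' Enter
tmux ls    # confirm the session exists

# Later, from any SSH connection:
#   tmux attach -t work
```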
Recommended Tool Stack
| Tool | Purpose |
|---|---|
| Aider | Terminal AI coding assistant, edits files, commits to Git |
| Claude Code | Anthropic official CLI, great for complex tasks |
| tmux | Session persistence, survives disconnects |
| VS Code Remote SSH | Local UI + Remote execution |
Handling 10+ VS Code Windows?
Another problem I wrestled with: what if I need to work on ten or more Next.js projects simultaneously?
I tried four approaches:
| Approach | Performance | Experience | Stability | Verdict |
|---|---|---|---|---|
| 2017 iMac | Poor | Zero latency | Fans spin like a helicopter | Rejected |
| High-end Windows | Excellent (needs 64GB RAM) | Great | NTFS slow on node_modules | Recommended |
| Windows WSL2 | Excellent | Near-local | Disconnects | Backup (needs config) |
| Remote server half globe away | Best compute | 200ms+ latency | SSH unstable | Not recommended |
Why 10+ Next.js Projects Drain Resources
Each Next.js (App Router) dev server consumes 500 MB to 1.5 GB of memory. Ten projects means:
- Memory: At least 16-24GB just for Node.js, plus VS Code and browser. Start at 32GB, 64GB for stability.
- File watchers: Ten projects exhaust system file handles.
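On Linux, watcher exhaustion usually surfaces as ENOSPC errors from the dev server. A sketch for checking, and raising, the inotify watch limit (524288 is a commonly used value, not an official recommendation):

```shell
# Check the current inotify watch limit; Next.js/webpack watchers eat
# these quickly with many projects (defaults are often 8192 or 65536).
cat /proc/sys/fs/inotify/max_user_watches

# To raise it persistently (requires root):
#   echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
#   sudo sysctl -p
```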
Fixing WSL2 Disconnects
If you use WSL2, the most common issue is random disconnects. Create .wslconfig in your Windows user directory:
```ini
[wsl2]
memory=48GB
processors=12
networkingMode=mirrored
```

`networkingMode=mirrored` is key. It fixes most random network disconnection issues.

Also, never put projects in /mnt/c/. They must be on WSL-internal paths like /home/user/projects. This alone gives a 10x speed improvement.
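A small helper for mapping and moving a project out of /mnt/c (illustrative only; the paths are placeholders for your own layout):

```shell
# wsl_target: suggest a WSL-native home for a project under /mnt/c.
wsl_target() {
  printf '%s/projects/%s\n' "$HOME" "$(basename "$1")"
}

# Example move, excluding node_modules (reinstall natively afterwards):
#   src=/mnt/c/Users/me/projects/myapp
#   mkdir -p "$(wsl_target "$src")"
#   rsync -a --exclude node_modules "$src/" "$(wsl_target "$src")/"
#   cd "$(wsl_target "$src")" && npm install
```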
My Workflow Summary
After all this experimentation, my current workflow:
- Daily development: Home Linux + Tailscale Exit Node (Netherlands VPS)
- Model downloads: export HF_ENDPOINT=https://hf-mirror.com; the mirror maxes out bandwidth
- Long-running tasks: SSH to the VPS, run Aider or Claude Code in tmux, go to sleep
- Multi-project parallel work: Windows + WSL2 with a properly configured .wslconfig
Core principle: Make network problems disappear at the infrastructure level. Then you never think about it while coding.
Resources
- Tailscale Official
- Tailscale Exit Node Docs
- HuggingFace Mirror
- Aider - Terminal AI Coding Assistant
- tmux Complete Guide
If you are also working on remote AI development setups, feel free to reach out. This stuff has a learning curve, but once configured, it just works.