Su Jiang
Tailscale + VPS: The Ultimate Network Setup for Remote AI Coding
2026/01/14


Use Tailscale mesh networking with a Netherlands VPS as an Exit Node to solve HuggingFace timeouts, GitHub clone failures, and API connectivity issues. Complete workflow for remote AI development.


The biggest pain point in AI development from home is not GPU power. It is network connectivity. HuggingFace times out. GitHub clones fail. Conda downloads at 10 KB/s.

I spent weeks figuring this out. If you have a VPS in a well-connected region, combined with Tailscale, you can solve almost all network problems. This is my journey and a complete guide.

[Diagram: Tailscale network architecture. The home Linux AI development machine (100.x.x.x) connects to the Netherlands VPS (100.x.x.y) over the Tailscale mesh, a WireGuard-encrypted tunnel with NAT traversal / P2P. The VPS acts as the Exit Node, forwarding traffic on to HuggingFace / OpenAI.]

The Pain: Network is the Silent Killer

You have experienced these scenarios:

  • pip install transformers hangs on Resolving for minutes
  • git clone a 5GB model repo, fails overnight
  • OpenAI API calls timeout constantly
  • HuggingFace .safetensors downloads reset mid-way

I have a decent Linux machine at home. The hardware is fine; the network is not. Then I picked up a cheap Netherlands VPS and noticed the same code runs flawlessly there.

The question became: how do I make my home machine "borrow" the Netherlands network?


Solution 1: Tailscale Exit Node (Recommended)

This is my current setup. The principle is simple: make the Netherlands VPS a Tailscale exit node, and route all traffic from the home machine through it.

To the outside world, your home machine appears to be in the Netherlands.

Step 1: Install Tailscale on Both Machines

Go to the Tailscale website, register an account, then install Tailscale on both machines:

curl -fsSL https://tailscale.com/install.sh | sh

Login:

sudo tailscale up

A link will appear for browser authorization. After that, tailscale status should show both machines online.

Step 2: Configure Exit Node on the VPS

On your Netherlands VPS:

# Enable IP forwarding (required)
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

# Advertise as exit node
sudo tailscale up --advertise-exit-node

Then go to Tailscale Admin Console, find your VPS, click Edit route settings, and check Use as exit node.

[Diagram: Exit node configuration flow. 1) VPS: run tailscale up --advertise-exit-node in the terminal. 2) Admin Console: check "Use as exit node". 3) Home: tailscale up --exit-node=... After that, all traffic flows via the VPS.]

Step 3: Use the Exit Node from Home

Assuming your VPS is named dutch-vps in Tailscale:

sudo tailscale up --exit-node=dutch-vps

Done. All your traffic now routes through the Netherlands.

Verify

curl ipinfo.io

If the returned IP and region show Netherlands, you are set.
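You can script the same check. The function below is just a convenience sketch: it parses the two-letter country code out of the JSON that ipinfo.io returns (the response contains a "country" field):

```shell
# Parse the two-letter country code out of ipinfo.io-style JSON on stdin.
country_of() {
  sed -n 's/.*"country": *"\([A-Z]*\)".*/\1/p'
}

# Live check (requires network); expect NL while the exit node is active:
#   curl -s ipinfo.io | country_of
```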

Quick Toggle Aliases

Add these to your .bashrc:

# VPN on (route through VPS)
alias vpn-on='sudo tailscale up --exit-node=dutch-vps'

# VPN off (direct connection)
alias vpn-off='sudo tailscale up --exit-node='

Solution 2: HuggingFace Mirror (Fastest Downloads)

If your main issue is downloading large models from HuggingFace, use a mirror instead of a proxy. It is faster.

Add this to your terminal or .bashrc:

export HF_ENDPOINT=https://hf-mirror.com

Now huggingface-cli download and the transformers library will use the mirror automatically. Download speeds can max out your bandwidth.
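Under the hood, the hub client resolves files at {endpoint}/{repo_id}/resolve/{revision}/{filename}, so swapping HF_ENDPOINT redirects every download. A small sketch of that mechanism (the repo and file names are just examples):

```shell
# Build the URL a Hugging Face hub file resolves to, honoring HF_ENDPOINT.
hf_file_url() {
  local repo="$1" file="$2" rev="${3:-main}"
  printf '%s/%s/resolve/%s/%s\n' "${HF_ENDPOINT:-https://huggingface.co}" "$repo" "$rev" "$file"
}

# export HF_ENDPOINT=https://hf-mirror.com
# hf_file_url bert-base-uncased config.json
# -> https://hf-mirror.com/bert-base-uncased/resolve/main/config.json
```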

Note: This only solves HuggingFace downloads. It does not help with OpenAI API or GitHub.


Solution 3: Precise Proxying (Proxychains)

If you do not want to route all of your traffic through the VPS and only need a proxy for specific scripts, there are two lighter options.

Assuming you have a local proxy client (like Clash) on port 7890:

Method A: Temporary Environment Variables

export http_proxy=http://127.0.0.1:7890
export https_proxy=http://127.0.0.1:7890
python my_ai_script.py
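These exports leak into everything else you run from that shell. If you prefer, a tiny wrapper (a sketch; port 7890 is the same assumption as above) scopes the proxy to a single command via env:

```shell
# Run one command with the proxy variables set, leaving the shell clean.
pxrun() {
  env http_proxy=http://127.0.0.1:7890 \
      https_proxy=http://127.0.0.1:7890 \
      "$@"
}

# pxrun python my_ai_script.py
# pxrun git clone https://github.com/huggingface/transformers.git
```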

Method B: Proxychains Force Takeover

Some libraries (especially those with C++ backends) ignore environment variables. Use Proxychains:

sudo apt install proxychains4

Edit /etc/proxychains4.conf, change the last line to:

http 127.0.0.1 7890

Then run:

proxychains4 python my_ai_script.py
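Two other lines in the same file are worth knowing: proxies belong under the [ProxyList] section, and enabling proxy_dns routes DNS lookups through the proxy as well, so hostname resolution does not leak to your local resolver. A minimal working /etc/proxychains4.conf looks like:

```
strict_chain
proxy_dns
[ProxyList]
http 127.0.0.1 7890
```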

Is Remote SSH Good for AI Coding?

This is another question I have been exploring. The answer: absolutely yes.

[Diagram: Advantages of remote SSH AI coding: 24/7 uptime (tmux + AI agent persistence), environment isolation (pip/npm do not pollute the local machine), fast network (API calls, model downloads), VS Code Remote SSH (local UX + cloud compute), and Aider / Claude Code (terminal-native AI coding).]

Why Remote Servers Work Better

  1. 24/7 Uptime: Run AI tasks in tmux on the server. Close your laptop and they keep running. Wake up to finished code.

  2. Environment Isolation: AI-generated code often pulls in many dependencies. Install them on the server, not on your local system.

  3. Network Advantage: Overseas servers have more stable connections to the OpenAI and Anthropic APIs.

  4. VS Code Remote SSH: The local machine handles only the UI; all code execution happens on the server. It feels like local development.
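Point 1 is the pattern I lean on most. A helper like this composes the SSH one-liner that starts a task in a detached tmux session. It is only a sketch: it prints the command rather than running it, dutch-vps and the aider invocation are placeholders, and the quoting is simplified.

```shell
# Print the ssh one-liner that launches a command in a detached tmux
# session on a remote host. Copy-paste the output to run it.
remote_task() {
  local host="$1" session="$2"
  shift 2
  printf 'ssh %s tmux new-session -d -s %s "%s"\n' "$host" "$session" "$*"
}

# remote_task dutch-vps agent "aider --yes"
# -> ssh dutch-vps tmux new-session -d -s agent "aider --yes"
```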

Recommended Tool Stack

| Tool | Purpose |
|------|---------|
| Aider | Terminal AI coding assistant; edits files, commits to Git |
| Claude Code | Anthropic's official CLI, great for complex tasks |
| tmux | Session persistence; survives disconnects |
| VS Code Remote SSH | Local UI + remote execution |

Handling 10+ VS Code Windows?

Another problem I wrestled with: what if I need to work on ten or more Next.js projects simultaneously?

I tried four approaches:

| Approach | Performance | Experience | Stability | Verdict |
|----------|-------------|------------|-----------|---------|
| 2017 iMac | Poor | Zero latency | Fans spin like a helicopter | Rejected |
| High-end Windows PC | Excellent (needs 64 GB RAM) | Great | NTFS slow on node_modules | Recommended |
| Windows + WSL2 | Excellent | Near-local | Random disconnects | Backup (needs config) |
| Remote server half a globe away | Best compute | 200 ms+ latency | SSH unstable | Not recommended |

Why 10+ Next.js Projects Drain Resources

Each Next.js (App Router) dev server consumes 500 MB to 1.5 GB of memory. Ten projects means:

  • Memory: at least 16-24 GB for Node.js alone (each project runs a dev server plus its own VS Code TypeScript server), on top of VS Code itself and a browser. Start at 32 GB; go to 64 GB for stability.
  • File watchers: ten projects can exhaust the system's file-watcher handles.

Fixing WSL2 Disconnects

If you use WSL2, the most common issue is random disconnects. Create .wslconfig in your Windows user directory:

[wsl2]
memory=48GB
processors=12
networkingMode=mirrored

networkingMode=mirrored is the key line (it requires a recent Windows 11 build) and fixes most random network disconnection issues.

Also, never put projects in /mnt/c/. Keep them on WSL-internal paths like /home/user/projects; this alone can yield a roughly 10x speedup.
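A quick guard for that rule, as a sketch: flag any project path that sits on the Windows mount before you npm install into it.

```shell
# Return success (0) if a path lives on the slow /mnt Windows mount.
in_windows_mount() {
  case "$1" in
    /mnt/*) return 0 ;;
    *)      return 1 ;;
  esac
}

# in_windows_mount "$PWD" && echo "move this project to ~/projects"
```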


My Workflow Summary

After all this experimentation, my current workflow:

  1. Daily development: Home Linux + Tailscale Exit Node (Netherlands VPS)
  2. Model downloads: export HF_ENDPOINT=https://hf-mirror.com, mirrors max out bandwidth
  3. Long-running tasks: SSH to VPS, run Aider or Claude Code in tmux, go to sleep
  4. Multi-project parallel: Windows + WSL2, properly configured .wslconfig

Core principle: Make network problems disappear at the infrastructure level. Then you never think about it while coding.


Resources

  • Tailscale Official
  • Tailscale Exit Node Docs
  • HuggingFace Mirror
  • Aider - Terminal AI Coding Assistant
  • tmux Complete Guide

If you are also working on remote AI development setups, feel free to reach out. This stuff has a learning curve, but once configured, it just works.
