Multi-Node Replication
A two-device setup is simple, but real production architectures often involve three or more nodes — an always-on server, a developer laptop, and perhaps an office NAS. Designing the mesh correctly prevents sync storms and data inconsistencies.
Apply the hub-and-spoke model for most deployments: a single always-on VPS acts as the central hub that all edge devices connect to, rather than a full mesh in which every device connects to every other.
| Topology | Nodes | Benefits | Drawbacks |
|---|---|---|---|
| Two-device pair | 2 | Simple | Single point of failure |
| Full mesh | 3+ | Highest redundancy | N² connections, sync storm risk |
| Hub-and-spoke | 3+ | Centralized, easy to manage | Hub is SPOF unless replicated |
| Hub-and-spoke + peer mesh | 4+ | Best of both | More configuration required |
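The scaling difference between the two topologies is easy to quantify: a full mesh needs one connection per device pair, while hub-and-spoke needs one connection per edge device. A quick sketch (the node count `N=5` is an arbitrary example value, not from the table above):

```shell
# Connection counts for N nodes; N=5 is an arbitrary example
N=5
full_mesh=$(( N * (N - 1) / 2 ))   # every device pair connects directly
hub_spoke=$(( N - 1 ))             # each edge device connects only to the hub
echo "full mesh: $full_mesh connections, hub-and-spoke: $hub_spoke connections"
# → full mesh: 10 connections, hub-and-spoke: 4 connections
```

The quadratic growth of the full mesh is what drives the "sync storm" risk noted in the table: every change fans out across every pair.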
Hub-and-Spoke Architecture
```mermaid
flowchart TD
    HUB["Always-On VPS<br/>hub node"] <-->|TLS sync| LAP[Developer Laptop]
    HUB <-->|TLS sync| OFF[Office NAS]
    HUB <-->|TLS sync| CI[CI/CD Runner]
    LAP -.->|direct, optional| OFF
```
In this model:
- The VPS hub is always connected and acts as the authoritative source of sync
- Edge devices sync to and from the hub even when offline from each other
- Edge-to-edge direct sync is optional (lower latency when both are online)
Step-by-Step: Three-Node Setup
1 — Install and Start Syncthing on All Nodes
```shell
# Run on each node (VPS, laptop, NAS)
sudo apt install syncthing -y
systemctl --user enable --now syncthing
syncthing --device-id   # note this ID for each node
```
2 — Collect Device IDs
On each node, record the Device ID:
```text
VPS Hub:    K3X2R...-HUB
Laptop:     M7PTQ...-LAP
Office NAS: R9WZV...-NAS
```
3 — Add Devices on the Hub
On the VPS hub, add both edge devices via GUI or config:
```xml
<device id="M7PTQ...-LAP" name="developer-laptop">
    <address>dynamic</address>
</device>
<device id="R9WZV...-NAS" name="office-nas">
    <address>dynamic</address>
</device>
```
Repeat on each edge device — add the hub's Device ID. For edge-to-edge sync, also add each other's IDs on the respective edge nodes.
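If your Syncthing build ships the bundled CLI (available since roughly v1.15), devices can also be registered from the shell instead of editing config.xml by hand. A sketch, assuming the CLI is present; the IDs are the same placeholders from step 2:

```shell
# On the hub: register both edge devices via the bundled CLI
# (a sketch, assuming `syncthing cli` is available in this build)
syncthing cli config devices add --device-id "M7PTQ...-LAP" --name "developer-laptop"
syncthing cli config devices add --device-id "R9WZV...-NAS" --name "office-nas"
```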
4 — Share the Folder on the Hub
```xml
<folder id="project-data" path="/var/www/html/data" type="sendreceive">
    <device id="M7PTQ...-LAP"></device>
    <device id="R9WZV...-NAS"></device>
</folder>
```
In the GUI: Add Folder → Share With — select both devices.
5 — Accept on Edge Devices
Each edge device will receive an invitation. Accept it and configure the local path:
- Laptop: `~/Documents/project-data`
- NAS: `/mnt/array/project-data`
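To skip the manual accept step on trusted edge devices, Syncthing can auto-accept folders offered by a specific device. A config sketch for the hub's `<device>` entry in an edge node's config.xml (the `vps-hub` name is illustrative):

```xml
<!-- On an edge device: auto-accept folders shared by the hub -->
<device id="K3X2R...-HUB" name="vps-hub">
    <address>dynamic</address>
    <autoAcceptFolders>true</autoAcceptFolders>
</device>
```

Auto-accepted folders land in Syncthing's default folder path, so this trades the manual path choice above for convenience.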
Propagation Timing
```mermaid
sequenceDiagram
    participant LAP as Laptop (offline)
    participant HUB as VPS Hub (always on)
    participant NAS as Office NAS
    LAP->>HUB: Laptop comes online, syncs changes
    HUB->>NAS: Hub propagates changes to NAS
    Note over LAP,NAS: NAS receives changes even if Laptop and NAS never connect directly
```
Verifying Replication Health
```shell
STKEY="your-api-key"
CERT="$HOME/.local/share/syncthing/https-cert.pem"

# Per-folder sync state
curl -fs -H "X-API-Key: $STKEY" \
  "https://localhost:8384/rest/db/status?folder=project-data" \
  --cacert "$CERT" | jq '{state: .state, needBytes: .needBytes, errors: .errors}'

# Connected peers
curl -fs -H "X-API-Key: $STKEY" \
  "https://localhost:8384/rest/system/connections" \
  --cacert "$CERT" | jq '.connections | to_entries[] | select(.value.connected) | .key'
```
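The first query's output can feed a health-check script. A minimal sketch using a captured sample payload, so it runs without a live instance; the field names match `/rest/db/status`, the values are made up:

```shell
# Sample payload standing in for a live /rest/db/status response (values are made up)
status='{"state":"idle","needBytes":0,"errors":0}'

state=$(echo "$status" | jq -r '.state')
need=$(echo "$status" | jq '.needBytes')

# "idle" with nothing left to fetch means the folder is fully in sync
if [ "$state" = "idle" ] && [ "$need" -eq 0 ]; then
  echo "project-data: in sync"
else
  echo "project-data: still syncing ($need bytes needed)"
fi
```

In a real health check, replace the hardcoded `status` with the curl call above and run it from cron or a monitoring agent.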
Conflict Prevention in Multi-Node Environments
With three or more nodes editing the same files:
- Assign a primary node as the "source of truth" — only it should write
- Use folder types — set edge devices to `receiveonly` if they only consume data
- Never edit the same file simultaneously on two nodes — use work queues or locking
- See How Syncthing Handles Conflicts for conflict file resolution
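For the folder-type rule above, a consumer-only edge node declares the shared folder as receive-only in its local config. A sketch of the NAS side, reusing the paths and placeholder IDs from the steps above:

```xml
<!-- On the NAS: consume project-data without pushing local changes back -->
<folder id="project-data" path="/mnt/array/project-data" type="receiveonly">
    <device id="K3X2R...-HUB"></device>
</folder>
```

Local modifications on a receive-only folder are flagged in the GUI rather than synced out, which keeps the hub's copy authoritative.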
Common Mistakes
| Mistake | Effect | Fix |
|---|---|---|
| Full mesh with 5+ nodes | Sync storm — every change triggers N updates | Use hub-and-spoke; let hub propagate |
| Same folder path on all nodes | Works but confusing; breaks if paths differ | Use consistent paths or document per-node paths |
| Not labelling devices with descriptive names | Hard to audit config.xml | Set meaningful name attributes on each <device> |
| Hub down = edge devices can't sync to each other | Single point of failure | Add direct edge-to-edge device pairs for critical folders |
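For the last row, mitigating the hub as a single point of failure means the laptop and NAS each list the other as a device and share the critical folder with it directly. A sketch of the laptop's config, assuming the placeholder IDs from step 2:

```xml
<!-- On the laptop: know both the hub and the NAS, share the folder with both -->
<device id="K3X2R...-HUB" name="vps-hub"><address>dynamic</address></device>
<device id="R9WZV...-NAS" name="office-nas"><address>dynamic</address></device>
<folder id="project-data" path="~/Documents/project-data" type="sendreceive">
    <device id="K3X2R...-HUB"></device>
    <device id="R9WZV...-NAS"></device>
</folder>
```

With the mirror entries on the NAS, changes still propagate between the two edges when the hub is down, and the hub simply catches up when it returns.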