# Networking and mesh configuration
Ployz networking is built on two layers: a WireGuard overlay mesh that connects every node at the IP level, and a set of cluster services — gateway, DNS, and NATS — that run on top of the overlay. This page explains how those layers are configured and how they interact.
## WireGuard overlay mesh
Each node in a Ployz cluster runs a WireGuard interface (`wg0`) and gets a unique subnet carved from the cluster’s address range. Workload containers and sidecars bind to addresses inside that subnet, and they can reach any other node’s addresses directly over the encrypted tunnel.
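Because the overlay is plain WireGuard, standard tooling on a node can inspect it (this is generic Linux tooling, not a `ployzctl` feature):

```sh
# Show the wg0 interface, its peers, and its overlay addresses.
wg show wg0
ip addr show dev wg0
```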
### Address allocation
Two fields in `config.toml` control the address space:
| Field | Default | Description |
|---|---|---|
| `cluster_cidr` | `10.101.0.0/16` | The full address range shared by all nodes in the mesh |
| `subnet_prefix_len` | `24` | The prefix length of each node’s slice of that range |
With the defaults, a /16 range split into /24 subnets yields 256 possible nodes with 254 usable addresses each. To support more nodes, widen `cluster_cidr`. To give each node more addresses, decrease `subnet_prefix_len`.
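As a sizing sketch (the values are illustrative, not recommendations), a larger cluster might widen the range and enlarge each node’s slice:

```toml
# Hypothetical sizing: a /10 range split into /22 subnets gives
# 2^(22 - 10) = 4096 possible nodes, each with 1022 usable addresses.
cluster_cidr = "10.64.0.0/10"
subnet_prefix_len = 22
```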
### Endpoint ordering
When a node advertises its network addresses to peers, Ployz filters and orders them according to a fixed policy. The ordering matters because it becomes the candidate order WireGuard uses for endpoint selection and rotation:
- Dropped entirely: loopback, link-local, IPv6 ULA, interfaces below the minimum MTU for the overlay, and container, bridge, or helper interfaces that are not cluster-facing.
- Ordered by likely usefulness:
  - private RFC 1918 addresses first
  - CGNAT addresses second
  - public addresses after that
Public-IP discovery is folded into the same ordering: directly routable private paths are preferred over broader internet paths, but NAT-discovered public reachability is still advertised when needed. For example, a node holding 192.168.1.10 (RFC 1918), 100.64.3.2 (CGNAT), and 203.0.113.7 (public) would advertise all three in that order, after dropping its loopback and link-local entries.
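To see which candidate addresses exist on a node before this policy is applied, generic `ip` tooling is enough:

```sh
# List every interface address on the node. Under the policy above,
# loopback and link-local entries are dropped, RFC 1918 sorts first,
# CGNAT second, and public addresses last.
ip -o addr show | awk '{print $2, $4}'
```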
## Mesh commands
Use `ployzctl mesh` to manage the lifecycle of a mesh network.
### ployzctl mesh init
Create a new mesh network on this node and activate it as the current network. Pass a name as the argument, or use `--name-stdin` to read it from standard input.
```sh
ployzctl mesh init my-cluster
```

After init, the node generates a WireGuard keypair, allocates a subnet from `cluster_cidr`, and writes the network record to the store.
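In scripts, the same operation can read the name from standard input (the `echo` here is purely illustrative):

```sh
# Equivalent to the positional form above.
echo my-cluster | ployzctl mesh init --name-stdin
```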
### ployzctl mesh create
Create a named network record without activating it. Useful when you want to set up a network before starting it.
```sh
ployzctl mesh create staging
```
### ployzctl mesh start

Start the WireGuard interface and sidecars for an existing network.
```sh
ployzctl mesh start my-cluster
```
### ployzctl mesh stop

Stop the active mesh. Pass `--force` to stop even if workloads are still running.
```sh
ployzctl mesh stop
ployzctl mesh stop --force
```
### ployzctl mesh join --token

Join this node to an existing mesh using an invite token generated on the primary node. The token encodes the network’s public key, CIDR, and initial peer endpoints.
```sh
ployzctl mesh join --token "eyJ..."
# or read from stdin
ployzctl mesh join --token-stdin
```

After joining, the daemon connects to NATS through the overlay, syncs routing state, and begins receiving peer updates.
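The `status` and `ready` subcommands described next are a quick way to confirm the join took effect:

```sh
# Confirm peers and subnet state on the newly joined node.
ployzctl mesh status my-cluster
ployzctl mesh ready && echo "mesh healthy"
```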
### ployzctl mesh status / list
Inspect active network state:
```sh
ployzctl mesh list                # list all known networks
ployzctl mesh status my-cluster   # detailed peer and subnet state
ployzctl mesh ready               # exit 0 if mesh is healthy, 1 otherwise
ployzctl mesh ready --json        # machine-readable health report
```
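Because `ready` reports health through its exit code, it drops straight into scripts and health checks. A minimal sketch:

```sh
# Gate a maintenance script on mesh health.
if ! ployzctl mesh ready; then
  ployzctl mesh ready --json >mesh-health.json   # keep the detailed report
  echo "mesh unhealthy, aborting" >&2
  exit 1
fi
```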
## NATS control plane

NATS is the native substrate for cluster coordination. It provides durable key-value records, streams for ordered events, request/reply for foreground commands, and work queues for distributed operations.
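To make those four primitives concrete, here is what each looks like with the standalone `nats` CLI. The bucket and subject names are invented for illustration and are not Ployz’s actual schema:

```sh
# Illustrative only -- buckets and subjects here are hypothetical.
nats kv add config                                       # create a key-value bucket
nats kv put config node-a '{"subnet":"10.101.3.0/24"}'   # durable key-value record
nats pub events.deploy '{"app":"web"}'                   # ordered event on a stream
nats request cmd.status '{}' --timeout 2s                # request/reply command
nats sub work.migrate --queue workers                    # work-queue consumer
```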
NATS is managed as a sidecar by the daemon — you do not run or configure it separately. The daemon starts NATS when the mesh starts and adopts a running NATS process on restart if the configuration matches.
The data plane — WireGuard tunnels, the gateway, DNS, NATS, and running containers — continues to serve the last known good state when `ployzd` is absent. Daemon restart does not disrupt running workloads or break mesh connectivity.
## Gateway
The HTTP/HTTPS gateway runs as a sidecar on each node and proxies inbound traffic to the correct workload container based on routing rules published to the cluster store.
Configure the gateway through the daemon’s `config.toml`:
| Field | Default | Description |
|---|---|---|
| `gateway_listen_addr` | `0.0.0.0:80` | HTTP listen address |
| `gateway_https_listen_addr` | (unset) | HTTPS listen address; enables TLS when set |
| `gateway_threads` | `2` | Worker threads for the gateway process |
### HTTPS support
When `gateway_https_listen_addr` is set, the gateway serves TLS. Certificates are loaded from the cluster’s routing store using SNI-based selection. You can also supply static certificate paths via the gateway’s environment variables (`PLOYZ_GATEWAY_TLS_CERT_PATH` and `PLOYZ_GATEWAY_TLS_KEY_PATH`), but both paths must be set together with `gateway_https_listen_addr`.
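For example, static certificate material might be wired up like this (the paths are placeholders for wherever your certificates actually live):

```sh
# Both variables must be set together, alongside gateway_https_listen_addr.
export PLOYZ_GATEWAY_TLS_CERT_PATH=/etc/ployz/tls/fullchain.pem
export PLOYZ_GATEWAY_TLS_KEY_PATH=/etc/ployz/tls/privkey.pem
```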
A complete gateway stanza in `config.toml`:

```toml
gateway_listen_addr = "0.0.0.0:80"
gateway_https_listen_addr = "0.0.0.0:443"
gateway_threads = 4
```
## Cluster DNS

Each node runs a DNS sidecar that answers queries for cluster service names. Services deployed to Ployz are automatically registered in the cluster DNS and are reachable by name from any node in the mesh.
The DNS server listens on the node’s overlay IPv6 address on port 53. In Docker runtime mode, it may also bind a bridge address so containers in the `ployz-networking` namespace can resolve cluster names.
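Because it is an ordinary DNS server on port 53, standard tooling can query it directly. Both values below are placeholders; substitute your node’s overlay address and a registered service name:

```sh
# Query a cluster name against the node's DNS sidecar.
dig @<node-overlay-ipv6> my-service AAAA
```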
You do not configure cluster DNS directly in `config.toml`. The daemon provisions and configures the DNS sidecar automatically based on the active network and the node’s overlay address. To expose DNS metrics, set `dns_metrics_listen_addr` in `config.toml` or the `PLOYZ_DNS_METRICS_LISTEN_ADDR` environment variable.
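A minimal metrics stanza might look like this (the listen address is illustrative):

```toml
# Expose DNS sidecar metrics on a local port.
dns_metrics_listen_addr = "127.0.0.1:9153"
```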
## ZFS transfer port
When a volume is migrated between nodes, the daemon opens a direct TCP connection from the destination node to the source node to stream the ZFS dataset. The source node listens on `zfs_transfer_port` for these connections.
| Field | Default | Override |
|---|---|---|
| `zfs_transfer_port` | `4319` | `PLOYZ_ZFS_TRANSFER_PORT` or `--zfs-transfer-port` |
Ensure that port 4319 (or your configured value) is reachable between cluster nodes on the overlay network. The transfer always uses the overlay address, not the public IP.
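A quick way to verify reachability from the destination node is a plain TCP probe; the source address below is a placeholder overlay IP:

```sh
# Probe the source node's transfer port over the overlay.
nc -vz 10.101.2.1 4319
```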
If you run a host firewall such as nftables or iptables, allow inbound TCP on the transfer port for overlay addresses:
```sh
# Allow ZFS transfer from overlay addresses (example: 10.101.0.0/16)
nft add rule inet filter input ip saddr 10.101.0.0/16 tcp dport 4319 accept
```

On macOS, the daemon runs on the host and bridges to the Docker VM. ZFS volumes are not supported in the Docker runtime, so the transfer port is unused in the default macOS configuration.
## macOS networking architecture
On macOS, `ployzd` runs on the host. The WireGuard interface, NATS, gateway, DNS, and all workload containers run inside the Docker Desktop Linux VM. The daemon bridges the two environments:
```
macOS host                         Docker Desktop VM
+----------------+                 +------------------------------+
| ployzd daemon  |                 | ployz-networking container   |
|                |    WG bridge    | wg0 overlay interface        |
| OverlayBridge  +---------------->|                              |
|                |                 | nats-server                  |
| NATS bridge    +---------------->| ployz-gateway                |
|                |                 | ployz-dns                    |
|                |                 | workload containers          |
+----------------+                 +------------------------------+
```

OverlayBridge uses userspace WireGuard and a smoltcp TCP stack to bridge the macOS host into the container overlay network. NATS, gateway, and DNS bind on the node’s overlay IPv6 address so other mesh nodes can reach them directly.