Unexpected Connectivity Issues After Server Migration: Root Cause and Fix

A surprisingly simple issue caused several days of outages in my backup automation, SFTP connections, and CIFS mounts.

Recently, my server was silently moved to a different segment of the Hetzner infrastructure. During this relocation, the IPv6 subnet assigned to my host was changed, but this change was not communicated to me. As a result, my server continued using its old IPv6 address, even though that subnet no longer routed traffic to the outside world.

This created a very confusing situation:

  • The server still had a valid-looking IPv6 address.
  • The default IPv6 route existed and appeared correct.
  • DNS for the Storage Box resolved primarily to IPv6.
  • All outgoing IPv6 connectivity was dead (e.g., ping6 google.com).
  • Backup tools, SFTP, smbclient, and mount.cifs all tried IPv6 first and failed silently.
  • Connections only worked via IPv4 when forced manually.

Effectively, the system had half-working IPv6: enabled locally, but unusable externally.
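In hindsight, the quickest way to expose this state is to check the three layers in order: address, route, then an actual end-to-end probe; only the last one fails. A small sketch using standard iproute2/iputils commands (by default it only prints what it would run; setting RUN to empty executes the commands for real):

```shell
#!/bin/sh
# Three-layer IPv6 check: address -> route -> end-to-end probe.
# Prints the commands by default; run with RUN= (empty) to execute them.
check_ipv6() {
  run=${RUN-echo}
  $run ip -6 addr show scope global    # 1. does a global address exist?
  $run ip -6 route show default        # 2. is there a default route?
  $run ping -6 -c 3 google.com         # 3. does traffic actually flow?
}

check_ipv6
```

In the situation above, steps 1 and 2 look perfectly healthy while step 3 times out, which is the signature of a prefix that is configured locally but no longer routed.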

Root Cause

After I contacted support, it turned out that the IPv6 subnet assigned to my server had been changed to:

2a01:4f8:212:3d66::/64

while my server was still configured with the old one:

2a01:4f8:172:59::/64

Because of this mismatch, all IPv6 traffic was sourced from a prefix the network no longer routed, so connections died without any error from the local stack.

Fix

I updated the Netplan configuration to use the new IPv6 subnet:

addresses:
  - 138.201.x.x/32
  - 2a01:4f8:212:3d66::2/64
gateway6: fe80::1
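For context, this is roughly what the surrounding file looks like. It is a sketch, not my exact config: the filename and interface name are assumptions, and on newer Netplan versions the deprecated gateway6 key is replaced by a routes entry:

```yaml
# /etc/netplan/01-netcfg.yaml -- sketch; filename and interface name assumed
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 138.201.x.x/32
        - 2a01:4f8:212:3d66::2/64
      routes:
        - to: default        # replaces the deprecated gateway6 key
          via: fe80::1
          on-link: true
```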

After applying the change:

netplan apply
ping6 google.com → working again
sftp / cifs → working without IPv4 forcing

Outcome

A small configuration mismatch caused major issues because many modern tools prefer IPv6 automatically. Once the correct subnet was applied, everything returned to normal.

If you experience unpredictable connectivity failures after a datacenter migration, especially related to IPv6, always verify that your IPv6 prefix still matches what your provider has assigned.
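A minimal sketch of that verification, assuming you can get the assigned prefix from the provider's panel or support ticket, and that your addresses use the `…::host/64` form shown above (the helper names here are mine, not a standard tool):

```shell
#!/bin/sh
# Compare a configured IPv6 address against the prefix the provider says
# is assigned. ASSIGNED would come from the control panel or support;
# the configured address would come from `ip -6 addr show scope global`.
prefix_of() {
  # 2a01:4f8:212:3d66::2/64 -> 2a01:4f8:212:3d66::/64
  printf '%s\n' "$1" | sed 's|::.*|::/64|'
}

ASSIGNED='2a01:4f8:212:3d66::/64'

check() {
  if [ "$(prefix_of "$1")" = "$ASSIGNED" ]; then
    echo "prefix matches"
  else
    echo "prefix mismatch: reconfigure before debugging anything else"
  fi
}

check '2a01:4f8:172:59::2/64'    # the stale address from this incident
check '2a01:4f8:212:3d66::2/64'  # the corrected one
```

Catching the mismatch this way takes seconds; ruling out everything else first took days.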
