Synology NAS – Breaking a Bonded Network Interface

I thought I’d write a quick post about breaking a bonded network interface with a static IP on a Synology DS1812+ NAS, since I wasn’t able to get a definitive answer online about what happens.

The tl;dr answer is: one of the interfaces retains the static address, while the other goes to DHCP, as you probably guessed.

On to some details for those interested, as it certainly wasn’t straightforward…

I use a Synology DS1812+ running DSM 6.2.2-24922 as shared storage for my home lab; essentially iSCSI storage for two ESXi hosts.

I recently got the urge to set up some personal networked storage on my home network via the remaining storage on the DS1812+ that will also be accessible online; I’ve maxed out my free OneDrive storage and am cheaping out on the €2/month for 100GB! Besides, I want to see how easy and reliable it is to set up my own “cloud” storage. The Synology seems more than capable of providing storage both locally and remotely via browser, agent and mobile app:

Currently my Synology (which has two NICs) has a bonded interface into my home lab, assigned with a static IP of

My plan was to split this bonded interface, leaving one interface on the home lab’s storage network of, and moving the second one outside the home lab network, to my “personal” ethernet network on. I’ll detail the actual setup of the Synology as cloud storage in a future post, but the point of this post is to focus on what happens when the bond is broken.

Having never broken a Synology bonded interface before, I was expecting it to work somewhat like a teamed interface on an ESXi host, for example, where you can simply remove an individual physical interface from the team. But that isn’t the case: your only option is to “Delete” the bond.

This raised some questions: would one of the NICs retain the static IP address, and which one? What would happen to the other interface? Would there be a complete network outage while the change was made? The Synology knowledge base article wasn’t much help, simply stating “Once finished, you will see the two separate LAN interfaces in the Network Interface List”, followed by a screenshot giving no clue as to whether they were statically or DHCP addressed.

If you guessed one would stay static and one would go to DHCP, you’d be right, and that’s what I was expecting. But a quick Google beforehand put doubt in my mind, as some people were reporting that both interfaces went to DHCP. A post on Reddit seemed to suggest a 50/50 chance, though I’m guessing the difference is down to model/DSM version rather than being completely left to chance.

Obviously in a lab environment this isn’t too big a deal, but I like to treat my lab as if it were a production environment, and I wanted to break this bond with zero downtime. Both interfaces going to DHCP would make that very difficult, maybe even impossible.

So, in preparation for any issues with both interfaces going to DHCP, I set up a quick DHCP scope with a single address for, and ran Wireshark beforehand to take a quick look at DHCP requests on this VLAN, ensuring nothing else was going to grab the NAS address and make the storage unavailable to the ESXi hosts. I was doubtful this would provide uninterrupted network connectivity, but at the very least I’d still have management access to the NAS afterwards.
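If you’d rather watch the DHCP traffic from the command line instead of Wireshark, a tcpdump filter on the two DHCP ports does the same job. This is just a sketch: the interface name is an assumption, and the script only prints the capture command (tcpdump needs root) so you can review it before running it:

```shell
#!/bin/sh
# Sketch: watch for DHCP requests on a VLAN before breaking the bond.
# IFACE is an assumption -- substitute the interface attached to your VLAN.
IFACE="${1:-eth0}"

# DHCP uses UDP port 67 (server side) and port 68 (client side).
FILTER='udp and (port 67 or port 68)'

# Print the command rather than running it; copy-paste once you're
# happy with the interface and filter.
CMD="tcpdump -i $IFACE -n -v '$FILTER'"
echo "$CMD"
```

Anything chattering on those ports before you delete the bond is a candidate to steal an address from your fallback scope.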

I ran continuous pings from a workstation to both an ESXi host and a VM hosted on it, plus a ping back the other way from the VM to the workstation.
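The continuous pings above can be sketched as a small drop-counting loop, run once per source/target pair. The default target is a placeholder (my lab addressing is redacted), and the one-second reply timeout is an assumption:

```shell
#!/bin/sh
# Sketch of a drop-counting ping monitor; run one instance per pair
# (workstation -> storage, workstation -> host, VM -> workstation, ...).
# The default target is a placeholder -- pass your real address as $1.
TARGET="${1:-127.0.0.1}"
COUNT="${2:-3}"

sent=0; recv=0
while [ "$sent" -lt "$COUNT" ]; do
  sent=$((sent + 1))
  # -c 1: single echo request; -W 1: one-second reply timeout (assumed).
  if ping -c 1 -W 1 "$TARGET" >/dev/null 2>&1; then
    recv=$((recv + 1))
  else
    echo "$(date '+%H:%M:%S') drop to $TARGET"
  fi
done
echo "sent=$sent received=$recv dropped=$((sent - recv))"
```

The timestamps on the drop lines make it easy to see exactly how long the outage window was once the bond is deleted.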

The bond was deleted, and after about 20 seconds of DSM “working”, I had a total of 3 dropped packets from the workstation to the storage address and no dropped packets to the host or VM. The VM had zero dropped packets back to the workstation, and the ESXi hosts were able to communicate with the storage at all times, with no critical errors and no VM or host shutdowns.

The LAN1 interface on the Synology retained the static address, and of course the LAN2 interface picked up the same address via the single-address DHCP pool. This didn’t seem to cause any issues, but it wasn’t something I wanted to leave for long, so I quickly changed the LAN2 interface to its new address on my home network.
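If you have SSH access to the DSM, you can also double-check which interface ended up with which address from the shell. The eth0/eth1 names are assumptions (LAN1/LAN2 mapping and numbering can vary by model, so confirm with `ip link` first):

```shell
#!/bin/sh
# Sketch: list the IPv4 address on each former bond member after the split.
# eth0/eth1 are assumed names -- confirm with `ip link` on your DSM.
show_v4() {
  ip -4 addr show dev "$1" 2>/dev/null \
    | awk -v ifc="$1" '/inet /{print ifc ": " $2}'
}

show_v4 eth0   # expected to keep the static address (LAN1)
show_v4 eth1   # expected to pick up a DHCP lease (LAN2)
```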

So all in all, it was a fairly seamless failover, although I’d much prefer the ability to remove individual interfaces from a bond to avoid this kind of uncertainty. I don’t think the DHCP “fall-back” tactic is a great idea, especially not in a real production environment, and I’m not even sure it would have helped had both interfaces gone to DHCP; the delay in requesting a DHCP lease, plus the time DSM takes to make the interface change, is most likely long enough for host storage to be completely lost, likely resulting in host shutdown. Of course, it’s unlikely anybody is running production hosts on a Synology DS1812+ anyway! But if you are, or like me you just want to minimise downtime, hopefully the above will help if you need to break a bonded interface.

3 thoughts on “Synology NAS – Breaking a Bonded Network Interface”

  1. Thank you! I had to break the bond on my Synology and was wondering what would happen. Your answer was correct and holds true for DSM 7 with update 2.
