We are deploying a new OpenStack environment using Juju 1.18.4 and require the compute and other nodes to utilise interface bonding for increased bandwidth availability. To do this we configure bonding of eth0 and eth1 via the MAAS preseed.
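For reference, the bonding configuration the preseed writes out looks roughly like the following ifupdown fragment (a sketch only; interface names, addressing and bond options here are illustrative, not the exact values from our preseed):

```
# /etc/network/interfaces fragment (illustrative)
auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet dhcp
    bond-mode 802.3ad
    bond-miimon 100
    bond-slaves none
```

With this in place both slaves should come up enslaved to bond0 at boot, which is what makes the later `ifdown eth0` surprising.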
When we deploy using Juju we find that eth0 is down and not enslaved to the bond after the Juju agent and charms have deployed. This means that when our deployment of OpenStack using charms is complete we must reboot our compute, storage and infrastructure nodes in order to get the full bandwidth of bond0.
This looks to be caused by an `ifdown eth0` (seen in cloud-init-output.log) immediately prior to downloading the tools and installing and configuring the Juju agent on the host.
Additionally, when consolidating services onto a combined infra node using LXC we need both bonding and bridging, and in this scenario the `ifdown eth0` perturbs connectivity just long enough for the curl of the tools to fail, leaving the host/instance stuck in the pending state. The agent install process does not appear to retry if that curl fails.
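A simple retry wrapper around the tools download would mask this kind of brief outage. This is a sketch of what we mean, not Juju's actual install code; the function name, retry count and delay are our own:

```shell
#!/bin/bash
# Sketch: retry a download a few times instead of failing on the first
# transient network blip (e.g. while the bond/bridge is reconfigured).
fetch_with_retry() {
    local url=$1 out=$2 tries=${3:-5}
    local i
    for ((i = 1; i <= tries; i++)); do
        # -f: fail on errors, -sS: quiet but still report failures
        if curl -fsS -o "$out" "$url"; then
            return 0
        fi
        sleep 1   # give the interface a moment to settle before retrying
    done
    return 1
}
```

Even `curl --retry N` would be an improvement over giving up on the first failure.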
Juju and the MAAS provider should be aware that some deployment scenarios require custom network configuration, such as bonded interfaces.
The MAAS provider and networking code changed substantially in Juju 1.20.x; we want to retest this with 1.20.1.