Beware of L2 changes when using the Linux bonding driver in mode 6 (ALB)

I'll preface this by saying that I am not able to (read: do not want to) reproduce the issue, but after discussions with our very experienced network guy (20 years) and another senior analyst, this is what we came up with.

Further, this could just be due to an old version of Open-E, or to issues with our old switch stack.  If you would like to argue otherwise, I am happy to hear from the experts.

TL;DR: If you have a 'whitebox' SAN that uses the Linux bonding driver in ALB mode (mode 6), DO NOT make any kind of L2 change on the SAN switches without shutting everything down first.  Also, patch your ESXi hosts, like, ASAP.

We have an old legacy SAN (Open-E) with two NICs, each connected to our old Netgear switch stack (one NIC per switch).  Up until now, no issues with the unit.  Earlier this week I was adding a LAG to the switch stack as part of our new SAN deployment - the LAG did not use any active ports, and for a few hours everything seemed to be functioning just fine.

Then I get a ticket...xyz VMs are not responding.

I can ping the VM, but cannot RDP.  Weird. (In retrospect, probably normal behaviour in this situation.)  Hm, now the ESXi hosts are acting funny and have disconnected from vCenter.  I went over to the console (SSH is disabled, of course) and manually restarted the management agents.  This got stuck at 'management agent starting' on all three hosts - like...I left it 30 minutes and nothing changed.  At this point I cannot even load the VMware client directly against a host, even though most VMs are still okay.

So I hard reboot the one host I know has only 'non-critical' VMs on it.  It takes 20 minutes to even bring up the ESXi loading screen.  Checking the logs shows that certain LUNs are timing out - three of them, which looks a lot like the new SAN I'm working on, since it only has three LUNs.

Bam, I disconnect the LAG and the new SAN completely and reboot the host.  STILL not working.  I reboot all of the hosts.  Still nothing.  I go back to the log screen: still showing LUN timeouts...three LUNs.  Oh...crap...our Open-E SAN also has only three LUNs.  Sure enough, checking the actual LUN IDs against what one of our legacy v4 hosts shows proves it. (I should have triple-checked the LUN IDs in the first place.)
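If I'd had my wits about me, that cross-check is trivial to script.  Here's a minimal sketch in Python, assuming you've saved the output of 'esxcli storage core device list' from a host to a file and copied the volume identifiers out of the SAN GUI - the file name and naa values below are made up:

```python
# Cross-check the device identifiers an ESXi host reports against the
# identifiers the SAN says it is exporting.  Assumes you've saved the
# output of "esxcli storage core device list" to a local text file.

# Identifiers copied from the SAN's management GUI (hypothetical values)
SAN_LUN_IDS = {
    "naa.600144f0aaaaaaaaaaaaaaaaaaaaaa01",
    "naa.600144f0aaaaaaaaaaaaaaaaaaaaaa02",
    "naa.600144f0aaaaaaaaaaaaaaaaaaaaaa03",
}

def host_device_ids(path):
    """Pull naa.* device names out of saved esxcli output."""
    ids = set()
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith("naa."):
                ids.add(line.split()[0])
    return ids

if __name__ == "__main__":
    seen = host_device_ids("esxi01_devices.txt")  # hypothetical file name
    print("LUNs from this SAN visible to the host:", seen & SAN_LUN_IDS)
    print("Other devices on the host:", seen - SAN_LUN_IDS)
```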

What was weird about this: the Open-E SAN itself was still responsive - web GUI, ping, etc. - but browsing the datastores would just hit looong timeouts.

Company has now been fully down for an hour...time has run out.  (internal systems only, but still)

I power off EVERYTHING.  Switch stack, SANs, hosts, everything.  Bring it all up - leaving out the new SAN/switch stuff - and everything works fine.

Of course the vCenters (don't ask) and root CA were on the new SAN.  Friday night task, that is.

More investigation reveals that the network bond was configured with mode 6 (ALB, adaptive load balancing).  As I understand it, this mode balances incoming traffic using ARP negotiation - the bonding driver rewrites the source MAC address in outgoing ARP replies so that different peers learn different slave NICs (still not 100% on the details).  ARP operates at layer 2, and an L2 change like adding a LAG can disturb the MAC/ARP tables that ALB depends on.  Now, we still can't figure out why configuring unused ports would cause the Open-E to freak out - but the other Dell SANs connected to the same stack had zero issues, so we had to assume the bonding driver was the key.
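For the curious: you can see exactly which mode a Linux bond is running by looking at /proc/net/bonding/<bond name>.  A minimal sketch in Python, assuming the interface is called bond0:

```python
# Minimal sketch: report which bonding mode this box is actually running,
# so you know before touching the switches whether you're dealing with
# balance-alb (mode 6).  Assumes the bond interface is named bond0.
from pathlib import Path

BOND_STATUS = Path("/proc/net/bonding/bond0")  # adjust if your bond has another name

def bonding_mode():
    for line in BOND_STATUS.read_text().splitlines():
        if line.startswith("Bonding Mode:"):
            return line.split(":", 1)[1].strip()
    return "unknown"

if __name__ == "__main__":
    mode = bonding_mode()
    print("Bonding mode:", mode)
    if "balance-alb" in mode or "adaptive load balancing" in mode:
        print("WARNING: mode 6 (ALB) in use - treat L2 changes on the SAN switches with care.")
```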

Further, iSCSI may rely on having the exact same connection pathing, so if the ARP shuffling changed the paths at all, it could in theory break only iSCSI while leaving everything else up - this is utter conjecture on my part.

At any rate...it was agreed that this was a freak event (unavoidable with our skills/experience), and we should get this migration done ASAP.


Oh yeah, the ESXi patching?  There was a bug patched on July 12, 2013, explicitly for manual restarts of the management agents via the console locking up under certain circumstances.  The workaround for that issue is to restart them via SSH - you know, the thing that's disabled by default.
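If SSH is enabled, the restart boils down to bouncing hostd and vpxa on each host.  A rough sketch of scripting that from a workstation - Python with the paramiko library; the host names and credentials below are placeholders, use at your own risk:

```python
# Rough sketch: restart the ESXi management agents over SSH on several hosts.
# Assumes SSH is enabled on the hosts and paramiko is installed locally;
# host names and credentials are placeholders.
import paramiko

HOSTS = ["esxi01.example.local", "esxi02.example.local", "esxi03.example.local"]
USERNAME = "root"
PASSWORD = "changeme"

# hostd and vpxa are the management agents vCenter talks to
COMMANDS = ["/etc/init.d/hostd restart", "/etc/init.d/vpxa restart"]

def restart_agents(host):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=USERNAME, password=PASSWORD)
    try:
        for cmd in COMMANDS:
            _, stdout, stderr = client.exec_command(cmd)
            print(host, cmd)
            print(stdout.read().decode(), stderr.read().decode())
    finally:
        client.close()

if __name__ == "__main__":
    for h in HOSTS:
        restart_agents(h)
```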
