
Beware of L2 changes when using bonding driver and mode 6 (ALB)

I'll preface this by saying that I am not able (read: do not want) to reproduce the issue, but after discussions with our very experienced network guy (20 years) and another senior analyst, this is what we came up with.

Further, this could just be due to an old version of OpenE, or issues with our old switch stack. If any experts would like to weigh in and tell me otherwise, I'm happy to hear it.

TL;DR: If you have a 'whitebox' SAN that uses the Linux bonding driver in ALB mode, DO NOT make any kind of L2 changes on the SAN switch without shutting everything down first.  Also, patch your ESXi like ASAP.

We have an old legacy SAN (OpenE) with two NICs, each connected to our old Netgear switch stack (one per switch).  Up until now, no issues with the unit.  Earlier this week I was adding a LAG to the switch stack as part of our new SAN deployment - the LAG did not use any active ports, and for a few hours everything seemed to be functioning just fine.

Then I get a ticket...xyz VMs are not responding.

I can ping the VM, but cannot RDP.  Weird. (In retrospect, probably normal behaviour in this situation.)  Hm, now the ESXi hosts are being funny, disconnected from vCenter.  Went over to the console (SSH is disabled, of course) and manually restarted the management agents.  This got stuck at 'management agent starting' on all 3 hosts - like...I left it 30 minutes and no change.  At this point I cannot even connect the VMware client directly to a host, even though most VMs are still okay.

So I hard reboot the one host I know has 'non-critical' VMs on it.  It takes 20 minutes just to bring up the ESXi loading screen.  Checking the logs shows certain LUNs timing out - 3 LUNs, which sounds a lot like the new SAN I'm working on - it only has 3 LUNs.

Bam, disconnect the LAG and new SAN completely, reboot the host.  STILL not working.  I reboot all of the hosts.  Still nothing.  I go back to the log screen, still showing LUN timeouts...3 LUNs.  Oh...crap...our OpenE SAN has only 3 LUNs.  Sure enough, checking the actual LUN ID against what one of our legacy v4 hosts is showing proves this. (should have triple-checked the LUN ID in the first place)

What was weird about this - the OpenE SAN itself was still responsive - web GUI, ping, etc - but browsing the datastores would just get looong timeouts.

Company has now been fully down for an hour...time has run out.  (internal systems only, but still)

I power off EVERYTHING.  Switch stack, SANs, hosts, everything.  Bring it all up - leaving out the new SAN/switch stuff - and everything works fine.

Of course the vCenters (don't ask) and root CA were on the new SAN.  Friday night task, that is.

More investigation reveals that the network bond was configured with mode 6 (ALB), and this mode uses ARP negotiation to balance traffic (I'm still not 100% on how that works).  ARP deals with layer 2 addresses, and making changes like adding a LAG would affect the switches' ARP/MAC tables.  Now, we still can't figure out why configuring unused ports would cause the OpenE to freak out - but the other Dell SANs connected to the same stack had zero issues, so we had to assume the bonding driver was the key.
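For anyone as fuzzy on this as I was: mode 6 (balance-alb) does its receive load balancing by rewriting the source MAC in outgoing ARP replies, so different peers learn different slave MACs for the same IP - which is exactly the kind of thing a switch-side L2 change can upset.  Below is a rough sketch of what a two-NIC mode 6 bond looks like on a generic Linux box; the interface names and IP are invented, and this is NOT the actual OpenE configuration, just an illustration of what "bonding driver, mode 6" means in practice:

modprobe bonding mode=balance-alb miimon=100       # mode 6 = adaptive load balancing
ip link set bond0 up
ip link set eth0 down                              # slaves go down before being enslaved
echo +eth0 > /sys/class/net/bond0/bonding/slaves
ip link set eth1 down
echo +eth1 > /sys/class/net/bond0/bonding/slaves
ip addr add 10.0.10.50/24 dev bond0                # hypothetical iSCSI-facing address
cat /proc/net/bonding/bond0                        # shows the mode and slave status

The takeaway for me: the bond's behaviour depends entirely on which MACs the switches have learned on which ports, so it lives or dies by the state of the L2 tables.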

Further, iSCSI may rely on having the exact same connection pathing, so if the ARP stuff changed the paths at all, it could in theory render only iSCSI broken - this is utter conjecture on my part.

At any rate...it was agreed that this was a freak event (unavoidable with our skills/experience), and we should get this migration done ASAP.


Oh yeah, the ESXi patching?  There was a bug patched on July 12, 2013, explicitly for the manual restart of management agents via the console locking up under certain circumstances.  The workaround for that issue is to restart them via an SSH session - you know, the service that's disabled by default.
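For reference, the SSH-side restart (once you've enabled SSH from the DCUI under Troubleshooting Options) is just the standard ESXi 5.x agent restarts - roughly this, from memory:

/etc/init.d/hostd restart     # host agent - what the vSphere client talks to
/etc/init.d/vpxa restart      # vCenter agent - what vCenter talks to
services.sh restart           # or restart all the management agents in one go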
