A real doozie: Changing VM port group drops networking for 10 minutes

Update: Support had nothing, so we are forced to chalk it up to the vDS stuff being wonky. If anyone runs across this, it would be interesting to hear their solution.

What follows is the ticket we've submitted (with a few additions that might help the general public). This being the first time I've really had the chance to play around with and design my own VDS solution, mistakes were made. Long story short, I wanted to do X initially; the switches didn't like that and our network guy recommended against it, so we did Y. It turns out X does work, but we get weird, unsanctioned behaviour. (X being 'let vSphere handle networking, just give me VLAN trunk switchports', Y being 'let's use LAGs/LACP/MLAG/etc.')

I will update once we have a solution...

March 12 - Quick update: I moved the rest of the VMs off the old VDS, and roughly 25% of them had the same failure issue. We're still waiting for something concrete from Extreme - I suspect that with our failure scenario now 'non-reproducible', we're out of luck.

-----------------------
We are running into an issue when moving VMs between port groups on different virtual distributed switches. Some VMs work; others end up in a state where they can communicate with other VMs in the new port group but cannot ping the default gateway.

Environment overview:

  • vSphere 5.5, VDSes are 5.5 (not upgraded)
  • Switches are Extreme Networks
  • Old and new port groups are on the same VLAN, each with a single uplink; only the port group name and the owning virtual distributed switch differ
  • Old VDS is using “route based on originating virtual port”
  • New VDS is using “route based on physical NIC load” (a quick way to verify both policies is sketched after this list)
  • VMs are running successfully on the new port group/VDS/uplink
  • On the switch, MLAG is turned off and the switchports are configured as VLAN trunks
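
For anyone comparing against their own environment, a quick way to confirm the load balancing policy on both sides is PowerCLI. This is a minimal sketch, assuming the distributed switch cmdlets that ship with PowerCLI 5.5; the vCenter and port group names are placeholders:

Connect-VIServer vcenter.domain.local
Get-VDPortgroup -Name "Old PG","New PG" | ForEach-Object {
    $teaming = $_ | Get-VDUplinkTeamingPolicy
    "{0} ({1}): {2}" -f $_.Name, $_.VDSwitch.Name, $teaming.LoadBalancingPolicy
}

If memory serves, "route based on originating virtual port" shows up as LoadBalanceSrcId and "route based on physical NIC load" as LoadBalanceLoadBased.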


Process:

  1. We change the network label (port group) from ‘Old PG’ to ‘New PG’ (the equivalent PowerCLI change is sketched after this list)
  2. Press ‘OK’
  3. Symptoms begin
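
For reference, the same change made from PowerCLI would look roughly like the sketch below. The VM name is a placeholder, and we actually did this through the vSphere Client, so treat it as illustrative only (Set-NetworkAdapter should accept a distributed port group object in 5.5-era PowerCLI):

$vm  = Get-VM -Name "TestVM"
$npg = Get-VDPortgroup -Name "New PG"
$vm | Get-NetworkAdapter | Set-NetworkAdapter -Portgroup $npg -Confirm:$false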


Symptoms:

  1. Happens on multiple hosts (all with known good VMs in respective port groups)
  2. VM cannot ping its Default Gateway
  3. Clients on other switches cannot ping the VM
  4. VM can ping other VMs on the new port group (same VLAN)
  5. VM cannot ping other VMs on the old port group (same VLAN)
  6. When we were able to reproduce the issue, it took ~10 minutes before connectivity was restored with no action on our part (the guest-side checks are sketched below).
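
To make the symptoms concrete, this is roughly what the checks look like from inside a Windows guest during the outage window (the IPs are placeholders for the default gateway and a neighbour VM on the new port group):

ping 10.10.10.1       (default gateway - request times out during the outage)
ping 10.10.10.55      (another VM on ‘New PG’, same VLAN - replies normally)
arp -a                (check whether the gateway's MAC ever gets learned)
ipconfig /all         (confirm the guest still has the correct IP/gateway settings)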


What we tried to restore connectivity:

  1. Move back to old PG (this worked some of the time)
  2. Disconnect NIC->press OK->reconnect NIC->press OK via vSphere Client  (did not work)
  3. Disconnect->press OK->change port group back to old->press OK->reconnect->press OK  (did not work)
  4. Reboot the guest OS  (did not work)
  5. On the guest OS: arp -d  (did not work)
  6. On the guest OS: disable/enable the network adapter (Windows and Linux; example commands are sketched below)
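
For anyone repeating items 5 and 6, the Windows guest commands amount to roughly the following (the interface name is a placeholder; on the Linux side the equivalent was flushing the ARP cache and bouncing the interface):

arp -d *
netsh interface set interface "Local Area Connection" admin=disabled
netsh interface set interface "Local Area Connection" admin=enabled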


Research indicates this is an issue on the physical switch, but we cannot find any (obvious) errors/configuration issues.
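
If anyone wants to chase the physical switch angle, a reasonable starting point is handing the network team the affected VM's MAC address, port group, and current host, so they can look for a stale FDB/ARP entry on the relevant switchport. A minimal PowerCLI sketch (the VM name is a placeholder):

$vm = Get-VM -Name "AffectedVM"
$vm | Get-NetworkAdapter | Select-Object Name, MacAddress, NetworkName
$vm.VMHost.Name    # which host (and therefore which uplinks) the VM currently sits on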


Some research links...

  • https://communities.vmware.com/thread/420029
  • https://communities.vmware.com/thread/462638

