Technical version: EqualLogic PS series controller firmware update

This documents the process for updating an EqualLogic PS5000* to newer firmware (from 4.0.6 to 4.1.4 in our case). It should work for the PS6000* as well. The material here comes from the Dell EqualLogic document 'PS Series Storage Arrays - Updating Storage Array Firmware', the tech case sent over from Carl at Dell, and my own experience with all this. It's pretty unnerving to have a key piece of hardware die on the ONE piece of equipment that stores everything in the company, but now that I've run through this once, it's really not all that bad to do.

So if you have a controller update fail, it probably just needs the following!
-------------------------------------------------------------------------------------------------

Here's the process for getting the firmware kit from the internet to the controller.

1. On your SAN monitor box (XP or whatever), download the firmware from EqualLogic's tech site (logging in is another story!!). DO NOT rename the file to make it easier for you to FTP. You will be foiled. Place it in your C:\ (for ease of use).

2. 'open' an ftp session to an IP assigned to one of the controller's Ethernet ports. Set 'binary' mode, then 'put' the kit file onto the controller's FTP site.
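
A minimal sketch of the session from a command prompt on the monitor box (the IP and kit file name here are made up; substitute your own):

C:\> ftp
ftp> open 10.0.0.10
User: grpadmin
Password:
ftp> binary
ftp> put C:\kit_V4.1.4.tgz
ftp> quit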

3. Mind the upgrade path: a point update (4.0.1 to 4.0.2, for example) can be done online, but a bigger jump (4.0.1 to 4.1.1, for example) requires the SAN to be offline. So... all your hosts, in our case the 10 ESX hosts, must be powered off to prevent any issues with LUNs losing connection, or a host powering on and trying to access the SAN while it's updating.

4. Once everything is off, SSH to the controller IP and log in as grpadmin. Run the 'update' command, and say 'yes' when prompted.
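
For example (the group IP, prompt, and exact confirmation text here are illustrative, not verbatim):

login as: grpadmin
grpadmin@10.0.0.10's password:

group1> update
(answer 'yes' when prompted to proceed)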

5. Wait for the update to complete. If it takes longer than a few minutes, there's a good chance it has failed. When ours failed, it spent about 5 minutes 'retrying' the operation.

6. If it completes successfully, 'restart' the array.

7. Once you're back in, run 'member select show controllers'. You should see both controllers with the firmware version you updated to. If a controller is missing, or doesn't show the correct version, you'll need to follow the steps below.
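
The output looks roughly like this (the member name 'member1' and the column layout are illustrative, from memory):

group1> member select member1 show controllers

 Slot  Status     FW Rev
 ----  ---------  -------
 0     active     V4.1.4
 1     secondary  V4.1.4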

-------------------------------------------------------------------------------------------------
Get a serial connection (same setup as a Cisco connection - 9600/8/1/n/n) to the controller that is missing. You will need physical hands-on access to it, as the break sequence has a small window.
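
Any serial terminal will do: HyperTerminal or PuTTY set to 9600 baud, 8 data bits, 1 stop bit, no parity, no flow control. Or from a laptop with a USB serial adapter (the device path here is an assumption; check yours), something like:

screen /dev/ttyUSB0 9600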

From the CFE> prompt, restart the passive controller with the command "reload -softreload", and answer 'yes' to confirm.

In between the SP and NP messages, type Ctrl-P then Space. Do that a few times. You should get a prompt like this:

Boot Ethernet Port: []

Hit backspace a few times to make sure you're not accidentally putting anything in that field. Then hit return until you see "Boot Path". Type in "backup", and hit return until you get to the CFE prompt. Enter "reload" there to reboot on the old image.

Boot Ethernet Port: []
CM Boot IP Address: []
Network Mask: []
Boot Gateway IP Address: []
Boot Server IP Address: []
Boot Path (primary/backup): [primary] backup
CPU0 File Name: [eqlstor.gz]
CPU1 File Name: [eqlqrq.gz]
Core Dump Ethernet Port: []
Core Dump IP Address: []
Core Dump Network Mask: []
Core Dump Gateway IP Address: []
Core Dump Server IP Address: []
Core Dump Enable: []
Core Dump File Name: []
Pss Log Enable: []
CPU0 Memory Alloc: [75%]
CFE>reload


You should then boot properly into 4.0.6. Log in as grpadmin, and do the following:

1. Get to bash.
2. Look at the primary directory on the CF card, and verify that it's missing a lot of files.
3. rm the files that remain in the primary directory.
4. cp the files from backup to primary.
5. Restart the array.

CLI> support exec /bin/bash
cli-child-3.2# cd /pss
cli-child-3.2# ls -al
total 29294
dr-xr-xr-x 1 root wheel 16384 Dec 31 1979 .
dr-xr-xr-x 1 root wheel 14973061 Aug 7 22:25 ..
dr-xr-xr-x 1 root wheel 1024 Jan 24 2008 TOOLS
-r-xr-xr-x 1 root wheel 1107 Aug 21 07:54 agent.cnf
dr-xr-xr-x 1 root wheel 1024 Jan 24 2008 backup
-r-xr-xr-x 1 root wheel 218 Oct 7 05:07 config.txt
dr-xr-xr-x 1 root wheel 1024 Oct 7 04:11 dumps
-r-xr-xr-x 1 root wheel 0 Jan 24 2008 eco.a
-r-xr-xr-x 1 root wheel 0 Jan 24 2008 eqlcfe.gz
-r-xr-xr-x 1 root wheel 601 Oct 7 03:30 passwd
dr-xr-xr-x 1 root wheel 1024 Oct 7 05:07 primary
dr-xr-xr-x 1 root wheel 1024 Oct 7 03:57 update
cli-child-3.2# ls -al primary
total 760
dr-xr-xr-x 1 root wheel 1024 Oct 7 05:07 .
dr-xr-xr-x 1 root wheel 16384 Dec 31 1979 ..
-r-xr-xr-x 1 root wheel 1826 Oct 7 05:07 build.c
-r-xr-xr-x 1 root wheel 369664 Oct 7 05:18 ccom0102.bin
cli-child-3.2# ls -al backup
total 40934
dr-xr-xr-x 1 root wheel 1024 Jan 24 2008 .
dr-xr-xr-x 1 root wheel 16384 Dec 31 1979 ..
-r-xr-xr-x 1 root wheel 1813 Oct 7 04:01 build.c
-r-xr-xr-x 1 root wheel 393216 Oct 7 04:01 ccom0103.bin
-r-xr-xr-x 1 root wheel 393216 Oct 7 04:01 cemi0611.bin
-r-xr-xr-x 1 root wheel 203 Oct 7 04:01 cksums
-r-xr-xr-x 1 root wheel 636061 Oct 7 04:01 eqlqrq.gz
-r-xr-xr-x 1 root wheel 18069007 Oct 7 04:01 eqlstor.gz
-r-xr-xr-x 1 root wheel 929027 Oct 7 04:01 eqlx811.gz
-r-xr-xr-x 1 root wheel 393216 Oct 7 04:01 sumo0310.bin
-r-xr-xr-x 1 root wheel 120989 Oct 7 04:01 update.sh
cli-child-3.2# rm primary/*
cli-child-3.2# cp backup/* primary (this takes about 10 minutes, so be patient - it really does)
cli-child-3.2# exit
CLI> restart


Then catch the boot process again (between the SP and NP messages, as before) and enter Ctrl-P Space. Change the Boot Path from backup back to primary, and reload from CFE.
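
The relevant lines mirror the earlier sequence (hit return through the other fields):

Boot Path (primary/backup): [backup] primary
CFE> reload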

Once all that's done, you run the 'member select show controllers' command. You should see both controllers again! Yay!
