Eager readers want to know if EqualLogic array firmware upgrades are non-disruptive, so we tested it in our mighty remote environment! Bottom line: the firmware update process halts I/Os during cut-over from one controller to the other. This takes something like 27 seconds (26969 ms). As most operating systems have a disk timeout value of 30 seconds or longer, no one really notices.

If you have more than one array, and at least the largest array's worth of free room, the official way to do a firmware update is to use a "Maintenance Pool". When you assign a member array to the Maintenance Pool, the volumes on that array migrate to the other member(s) of your production pool. Depending on how much data you have to move, this may take a few hours or overnight. Once the member is evacuated, update it, reboot it, put it back into the production pool, and move on to the next array. We do this in production, and there is indeed no interruption whatsoever; a maintenance pool works well.

In a single-member pool, or if you are short of room, a maintenance pool is not an option. We were curious how bad the interruption would be in a single-member pool, so we tested it. In our test vCenter environment, we have one ESXi host connected to one dual-controller EqualLogic PS6000 array. VMs live on datastores provided by the array. This is a minimalistic, all-your-eggs-in-one-basket configuration. We deployed VMware's handy I/O Analyzer, a simple-to-use appliance based on Iometer for disk benchmarking.

We followed the usual EqualLogic firmware update procedure. We logged into the support site and downloaded the firmware. (EqualLogic docs are locked behind their support site, which is unfortunate. If you have an active support contract, you can get the docs and firmware there.) Next we started an I/O Analyzer disk benchmark run to put some I/O load against the array, then ran the update procedure from the Group Manager GUI. The first part just uploads and stages the firmware; to activate the staged firmware, you have to reboot the array. First the (non-active) secondary controller reboots and applies the update. The was-secondary, soon-to-be-primary controller comes back up with the new firmware, and the array fails over to it. This is where a small hiccup is noticeable: I/Os stopped for about 27 seconds during the cut-over (26929 ms). This is short enough that no actual grief ensued. The now-secondary, was-primary controller then reboots and applies the firmware.

The I/O Analyzer continued to happily chug along during the hiccup, as did the vCenter Windows VM; both were active during the update. Neither seemed to notice the hiccup, and neither logged anything related to disk issues. All in all, the update process was pretty painless, even with the array under load.

For many businesses, using thin-provisioned volumes is the best way to optimize a virtual machine environment. Thin provisioning can be applied at both the hypervisor level and the storage area network level. In the case of VMware and EqualLogic (EQL), although both can be used in conjunction with one another and both will grow together, VMware does not report to the EqualLogic when freed blocks can be returned to the free-space pool. This can cause an EqualLogic volume to appear to be using more space than is actually in use: VMware shows that 414.59 GB of free space is available, while EqualLogic shows only 272.09 GB. One of the few ways to fix this is to migrate all virtual machines to a newly created datastore, a process that can be incredibly time consuming and adds a large load to the environment.
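The ~27-second stall reported above is simply the longest gap between successive I/O completions during the controller cut-over. The actual measurement came from VMware's I/O Analyzer, not from code, but as a rough sketch with hypothetical timestamps, the figure could be pulled out of a completion-time log like this:

```python
def longest_stall_ms(timestamps_ms):
    """Return the longest gap (in ms) between successive I/O completion
    timestamps, given in ascending order."""
    return max(later - earlier
               for earlier, later in zip(timestamps_ms, timestamps_ms[1:]))

# Hypothetical completion log: steady I/O every 5 ms, then a long pause
# at the controller cut-over before I/O resumes.
completions = [0, 5, 10, 15, 15 + 26929, 15 + 26929 + 6]
print(longest_stall_ms(completions))  # 26929
```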
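Why the stall goes unnoticed comes down to simple arithmetic: a guest retries outstanding I/Os until its disk timeout expires, so an outage shorter than that timeout never surfaces as an error. A trivial sketch of that check (the 30-second default is typical of many operating systems, not universal):

```python
GUEST_DISK_TIMEOUT_MS = 30_000  # common default; varies by OS and tuning

def stall_is_survivable(stall_ms, timeout_ms=GUEST_DISK_TIMEOUT_MS):
    """True if the I/O stall ends before the guest's disk timeout expires,
    i.e. the guest retries internally and reports no error."""
    return stall_ms < timeout_ms

print(stall_is_survivable(26929))  # True: ~27 s fits under a 30 s timeout
```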
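On the thin-provisioning side, the gap between the two free-space figures quoted earlier is exactly the space VMware has freed but never reported back to the array. A quick check of the arithmetic:

```python
vmware_free_gb = 414.59  # free space as reported by VMware
eql_free_gb = 272.09     # free space as reported by the EqualLogic array

# The difference is dead space: blocks VMware freed that the array
# still counts as in use because no reclamation was ever reported.
unreclaimed_gb = round(vmware_free_gb - eql_free_gb, 2)
print(unreclaimed_gb)  # 142.5
```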