Sunday, June 22, 2014

One-liner to list names, NAA ids of the boot luns in a cluster

One-liner to list hosts, datastore names, and NAAs of the boot luns in a cluster.
get-cluster "cluster01" | get-vmhost | sort-object name | get-datastore | where-object {$_.CapacityGB -lt 10} | select @{n="host";e={($_ | get-vmhost | select name).name}}, name, @{n="NAA";e={($_ | get-scsilun | select canonicalname).canonicalname}}


Broken down:
  1. Get-cluster "cluster01" - start by getting the cluster
  2. get-vmhost - get all the ESXi hosts in the cluster
  3. sort-object name - optional to sort the hosts by name
  4. get-datastore - get the datastores attached to the hosts
  5. where-object {$_.CapacityGB -lt 10} - filters only the small datastores*.
  6. Select-object - pick the properties to display
  7. @{n="host";e={($_ | get-vmhost | select name).name}} - shows the name of the vmhost
  8. name - shows the name of the datastore object
  9. @{n="NAA";e={($_ | get-scsilun | select canonicalname).canonicalname}} - shows the naa of the datastore
*Adjust this where clause depending on how you can identify the boot luns in your environment. For example, if you name all your boot luns with 'boot' in them, you could use where-object {$_.name -like "*boot*"}. I pick all the small datastores because if the datastore is less than 10GB in our environment, it's always a boot lun.

Not all of the fields may display completely on your screen. You can always pipe to Out-File to dump the info to a text file, or pipe to Format-List.
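For example, the same pipeline can end like this to get the full, untruncated values into a file (the file path here is only a placeholder - use whatever suits your environment):

get-cluster "cluster01" | get-vmhost | sort-object name | get-datastore |
    where-object {$_.CapacityGB -lt 10} |
    select @{n="host";e={($_ | get-vmhost).name}}, name,
           @{n="NAA";e={($_ | get-scsilun).canonicalname}} |
    format-list | out-file C:\temp\bootluns.txt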

Sample output:
host     Name          NAA
----     ----          ---
hostab01 hostab01-boot naa.6005076801818...
hostab02 hostab02-boot naa.6005076801818...
hostab03 hostab03-boot naa.6005076801808...
hostab04 hostab04-boot naa.6005076801808...
hostab05 hostab05-boot naa.6005076801808...

Sunday, June 8, 2014

Storage vMotion just one hard disk using PowerCLI

I was looking for a quick way to storage vMotion some of a VM's hard disks, but not all of them. I found a pretty quick one-liner at ict-freak.nl:
Get-HardDisk -vm <VMname> | Where {$_.Name -eq "Hard disk <#>"} | `
% {Set-HardDisk -HardDisk $_ -Datastore <datastore> -Confirm:$false}
However, I found that Set-HardDisk is deprecated; Move-HardDisk is the preferred method. The help file for Move-HardDisk had some simple examples.
$vm = get-vm "MyVM"
$mydisk = $vm | get-harddisk -name "Hard Disk 3"
$mydatastore = get-datastore -name "SAN2"
move-harddisk -harddisk $mydisk -datastore $mydatastore


These can be combined into a one-liner like so:
Move-HardDisk -harddisk (get-vm "MyVM" | get-harddisk -name "Hard Disk 3") `
-datastore (get-datastore -name "SAN2") -confirm:$false -runasync

Worked perfectly for me.
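If you need to relocate more than one disk (but still not all of them), the same pattern extends to a loop. A quick sketch - the VM name, datastore name, and disk names below are just placeholders:

$vm = Get-VM "MyVM"
$target = Get-Datastore "SAN2"
# Move just these two disks, leaving the rest of the VM where it is
"Hard disk 3","Hard disk 4" | ForEach-Object {
    Move-HardDisk -HardDisk ($vm | Get-HardDisk -Name $_) `
        -Datastore $target -Confirm:$false -RunAsync
}

-RunAsync kicks off both relocations without waiting, so keep an eye on the tasks if you queue up several.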

Wednesday, May 14, 2014

I changed my password and now linked mode doesn't work!

My co-worker changed his password today, and suddenly when he launched vCenter he couldn't see any of our linked vCenters. He could connect to the linked vCenters directly just fine and see all of them, but whenever he connected the vCenter client to our main vCenter server it would not connect any of the linked vCenters.
It took a bit of snooping around in security logs in various vCenter servers, but the resolution was to connect to the main vCenter server, look at Home/Administration/Sessions and kill all of my co-worker's sessions. After that, he was able to connect his vCenter client to the main vCenter server and see all of our linked vCenters as well.
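If you'd rather not click through the Sessions view, the same cleanup can be scripted through the vSphere API's SessionManager. A hedged sketch - it assumes an existing Connect-VIServer session to the main vCenter, and 'DOMAIN\user' is a placeholder account name:

# List and terminate a user's stale sessions via the SessionManager view
$si = Get-View ServiceInstance
$sm = Get-View $si.Content.SessionManager
$stale = $sm.SessionList | Where-Object { $_.UserName -eq 'DOMAIN\user' }
if ($stale) {
    # TerminateSession takes an array of session keys
    $sm.TerminateSession(($stale | ForEach-Object { $_.Key }))
}

Be careful with the Where-Object filter - terminating your own current session will disconnect you.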

Friday, January 31, 2014

The case of the disappearing LUNs!


We ran into an interesting situation at work this week. For one cluster of ESXi hosts, LUNs were disappearing seemingly at random. Now, we had been using this cluster as a 'Jump' cluster to facilitate moving from one environment to another, so it had lots and lots of LUNs attached (162, in fact).

We found that every time we would 'Rescan' for new storage, some LUNs would be discovered and others would disappear. It seemed pretty obvious that we were running into some sort of storage maximum, but we were well below the published ESXi maximums of 256 attached LUNs and 1024 physical paths. We had previously run into another issue with VMFS heap, so every host in this cluster already had VMFS3.MaxHeapSizeMB set to its maximum. A real stumper.

A call to VMware revealed that we were, in fact, running into a heap issue, but it was not the VMFS heap that was filling. The lvmdriver module also has a heap setting, which by default is 42000KB (~42MB), and this lvmdriver heap was completely full. From the hosts' consoles, this was easily fixed by setting the module parameter with esxcli. That's great, but not going to work in an environment with 100s of hosts. PowerCLI to the rescue!!
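For reference, the console-side fix looks something like the following. The parameter name matches the maxHeapSizeKB option the function below reads, but the value 128000 is only an assumed example - confirm the right value for your environment with VMware support:

# Set the lvmdriver heap size (a reboot is required for module
# parameters to take effect). 128000 is a placeholder value.
esxcli system module parameters set -m lvmdriver -p "maxHeapSizeKB=128000"
# Verify the setting:
esxcli system module parameters list -m lvmdriver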

For today's fun, I wrote a quick function to get the lvmdriver heap size for ESXi 5.0 hosts.  Since I know the default from the VMware case, if the option in the lvmdriver is not explicitly set then the function returns the default value. 

function Get-lvmDriverHeapSize {
<#
.SYNOPSIS
 Gets the lvmDriverHeapSize configured for a 5.0 host
.DESCRIPTION
 This function will get the lvmDriverHeapSize for a 5.0 host if it
 has been configured.
.NOTES
 Author: Cheryl L. Barnett
.PARAMETER VMhost
 The host object to check
.EXAMPLE
 PS> Get-lvmDriverHeapSize -VMhost (Get-VMHost Test01.fqdn.local)
.EXAMPLE
 PS> Get-VMHost Test01.fqdn.local | Get-lvmDriverHeapSize
#>
  param(
    [parameter(ValueFromPipeline = $true, Mandatory = $true,
      HelpMessage = "Enter a VMHost entity")]
    [VMware.VimAutomation.ViCore.Impl.V1.Inventory.VMHostImpl]
    $VMhost)
  process {
    # Note that if multiple options are set then this entire function breaks
    if (!($VMhost.ExtensionData.Config.Product.Version -match "5.0")) {
      Throw "$($VMhost) is not an ESXi 5.0 host! Please use this function for 5.0 hosts only!"
    }
    $maxHeapSizeString = ($VMhost | Get-VMHostModule lvmdriver).Options.ToString()
    if ($maxHeapSizeString -match "maxHeapSizeKB=") {
      $maxHeapSizeKB = [int]($maxHeapSizeString.Split("="))[1]
    }
    else {
      # Not explicitly set, so return the default from the VMware case
      $maxHeapSizeKB = 42000
    }
    $maxHeapSizeKB
  }
}


So, with this function you can sweep through your environment relatively quickly and get the configured lvmdriver heap size. Note the caveat in the code - if the lvmdriver has multiple options set then this code will break. That's not an issue I want to tackle right now, since it's way beyond the scope of what I need to do to fix my problem.

If all goes well, next week I'll have a function to set the lvmdriver heap size.
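In the meantime, here's a rough sketch of what that setter might look like. Set-VMHostModule is the PowerCLI cmdlet for writing module options, but note the big caveat in the comments, and the default value below is only a placeholder:

# Hedged sketch: set the lvmdriver heap size on one host.
# Caution: Set-VMHostModule replaces the module's entire options string,
# so this clobbers any other lvmdriver options that may be set.
# A host reboot is required for the change to take effect.
function Set-lvmDriverHeapSize {
  param(
    [parameter(Mandatory = $true, ValueFromPipeline = $true)]
    $VMhost,
    [int]$HeapSizeKB = 128000   # placeholder - confirm the value with VMware
  )
  process {
    $module = Get-VMHostModule -VMHost $VMhost -Name lvmdriver
    Set-VMHostModule -VMHostModule $module -Options "maxHeapSizeKB=$HeapSizeKB"
  }
}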