Sunday 21 February 2016

Docker-machine and VMware vSphere Clusters, Lessons Learned

I have been doing a bit of work with various flavours of containers lately, which comes with the highs and lows of being a technologist. One interesting thing is that with the recent updates to docker-machine, my setup for binding and deploying to our vSphere environment no longer worked.

This issue presented itself when attempting to provision a new instance with the following error:

Running pre-create checks...

Error with pre-create check: "default host resolves to multiple instances, please specify"

Digging deeper, I found that where I had been using the CLI argument / variable 'VSPHERE_COMPUTE_IP' to specify an ESXi host to bind to, you now need to use 'VSPHERE_HOSTSYSTEM' (or as a CLI argument '--vmwarevsphere-hostsystem'). As this is new, the documentation is fairly light, as are real examples. What the documentation does provide is two syntax examples as follows:

for a cluster:             VSPHERE_HOSTSYSTEM=<Cluster Name>/
for a stand-alone host:    VSPHERE_HOSTSYSTEM=<Cluster Name>/*

Using those two syntax examples in the lab, I first attempted VSPHERE_HOSTSYSTEM="UCS8#General Use/", but this resulted in the same error message. It was also notable that the examples were pushing to single-node clusters, whereas the test cluster I was targeting has 4 nodes.

Anyway, to cut straight to it, the documentation is a little incomplete. You still need to target a specific host in the cluster; docker-machine will not pick one for you. As such, in a cluster with multiple hosts the actual syntax is:

VSPHERE_HOSTSYSTEM=<Cluster Name>/<Host Name>

So for my setup it looks like this:

VSPHERE_HOSTSYSTEM="UCS8#General Use/esxrack01.vce.asc" 

I guess while I am here another hint does not hurt. The other piece of lovely syntax is the optional variable for targeting a specific Resource Pool; to do this you use the variable VSPHERE_POOL. The syntax for this is:

VSPHERE_POOL="/<DataCentre>/host/<Cluster>/Resources/<Resource Pool>/..."

So again in our lab setup the variable looks like this:

VSPHERE_POOL="/ASC/host/UCS8#General Use/Resources/Containers" 

Docker-Machine and vSphere together are a great team and well worth a play, so I hope this helps.

I am also fortunate to be doing a bit of work with VMware's vSphere Integrated Containers (VIC), which takes these issues away and provides a very strong alternative to other scheduling/clustering services such as Docker Swarm within your vSphere environment, but more on that in another post later.

Thursday 18 February 2016

What is VMware VSAN 6.2 Erasure Coding?

I have been asked what the 'Erasure Coding' support in VSAN is all about, as it gets referenced in the same sentences as the new support for RAID 5 and RAID 6 configurations (only available in all-flash configurations).

VMware uses Erasure Coding as a term for any data partitioning scheme that allows data to be fragmented yet still recoverable in the case of failure, even if some parts are missing (through the use of parity). In practice, they are using Erasure Coding to refer to the move from a pure mirroring scheme to supporting both RAID 5 (3+1) and RAID 6 (4+2), which increases the amount of usable capacity you get from raw capacity across nodes.

Prior releases only supported a 1+1+witness or worse (1+n, where n = failure tolerance, but each n was a full mirror copy) scheme, hence you always needed a minimum of 3 nodes to start with but only really got 50% usable capacity at best. This is why the Erasure Coding support is getting such a big focus, as it makes VSAN more efficient with physical resources.
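
To put some numbers on it: protecting a 100GB object against a single failure consumes 200GB of raw capacity with mirroring but only around 133GB with RAID 5 (3+1), while tolerating two failures consumes 300GB with triple mirroring versus around 150GB with RAID 6 (4+2).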

Worth noting that Erasure Coding does not refer to a disk configuration as with traditional RAID, but to how the fragments are distributed between hosts. It is for this reason that the chosen mode also drives the minimum number of hosts required (3 for the standard setup, 4 for RAID 5 and 6 for RAID 6).

Wednesday 17 February 2016

So Who Does Use Linked vCenters?

OK, I am sure that most of you are on board, and I put my hand up: I personally am a fan. When I talk to a lot of clients though, I generally find that most aren't.


Two linked vCenters in Web Client
In the old days we linked the instances together with the linked mode utility (or unlinked them with the same tool). Nice and easy, and all of a sudden security and licensing information was shared between vCenter instances. The big gain though was that through the C# client (oh, I do miss your usefulness sometimes in this depleting role world) I had a one-stop shop to access up to 12 vCenters. Now it is even easier: just have the vCenters in a common Platform Services Controller (PSC) SSO implementation (common site / multi-site doesn't matter).

This in turn enables some very cool features beyond the original benefits. The one that drove me to link mine up was the ability to enable cross-vCenter migrations. Being someone fortunate enough to have access to a well-equipped lab, I have enabled this with VPLEX for the stretched storage and use both SRM 6.1 and vMotion in vCenter to control the migrations (more on that in another blog post, I think).

This blog is not to tell you how to link; that is covered in a lot of other blogs written by very smart people. Rather, I want to draw attention to issues that surface a lot with the newer Enhanced Linked Mode and those add-on services that use the Web Client.

As I have just experienced an issue again, I am writing this while it is fresh in my mind. What I see is that among a lot of product teams that ship Web Client plug-ins, there seems to be quite an absence of testing against vCenters in Enhanced Linked Mode. Have you ever had this style of conversation:

me)              I am <insert problem description here> in the web client
developer)   What version are you running?
me)              The one listed in the requirements
developer)   Are you using the Windows version of vCenter or the Appliance?
me)              I am using the (whatever) in linked mode
developer)   Ohh, we haven't tested that

Anyway, some gotchas I seem to experience include:

  • Only one vCenter is supported in the linked set for the plug-in
  • Only the first vCenter is visible to the plug-in
  • Plug-in is unable to filter objects down to those within its aligned vCenter (e.g. all datastores across all vCenters are displayed)
So the point of this entry, beyond the rant, is just to bring attention to the fact that there are some considerations when writing a plug-in for linked vCenters. If you are running into issues and you have linked vCenters, I would suggest that the first step should be to test against a stand-alone vCenter. This helps isolate the issue and can help when you are talking to the support/product/vendor teams.



Welcome VxRail, VCE's Entry into the HCIA Market


You want the ease of a hyperconverged appliance but still want all those advanced capabilities that service enterprise data centres? Then the new VCE VxRail is for you! This is not so much an evolution of the VMware EVO:RAIL appliances as a revolution.

Why do I say that? The Achilles heel of EVO:RAIL was the lack of flexibility: all you could do was choose between two RAM configurations (no variation in disk or CPU), 128GB or 192GB, with the 192GB model for some reason referred to as the desktop services model... I never quite got my head around that one; why wouldn't other services want a larger memory footprint? The other limitation was that you could only grow your appliance service out by adding additional appliances: start with a 4-node 2U appliance and then grow in increments of 4 from there until you hit the 64-node ceiling (aligned to the maximum size for an HA / VSAN cluster).

Now enter the VCE VxRail appliance: here we have flexibility that can meet a huge range of use cases. Your choices now include:


  • Starting point is now 3 nodes (today) and growth can then be in single node increments up to the 64 node maximum
  • CPU model options provide a range of 6 cores (single socket) through to 28 cores (dual socket) per node
  • Memory configurations from 64GB through to 512GB
  • Disk configurations of Hybrid (Flash + Spinning HDD) or all flash ranging from 3.6TB through to 19TB per node 
  • Even networking options will support 1G connections on the low end nodes with the standard being dual 10G 
You can also now mix different model configurations and EVO:RAIL appliances under the same management. Phew... all that flexibility, but with the ease of use that comes with the VxRail appliance.

Some other cool features of note are things such as:
  • All the coolness of VSAN 6.2 with deduplication, compression, flash optimisation, erasure coding, snapshots and cross-site HA (yes, you can link two remote appliances into a vSphere Metro Cluster)
  • VM Level replication with EMC RecoverPoint/VM (yep included) 
  • Cloud based storage tiering EMC Cloud Array (yep also included)
  • VCE Vision software for service health and interoperabilty compliance monitoring and reporting
  • vRealize Log Insight for log aggregation and analytics
  • vSphere Data Protection (EMC Avamar)
  • Appliance Market Place, your VxRail App Store for easy request / enablement of additional services
And that is not the end of it, too much to list :)

If you want to see it for yourself, have a look at this demo of the VxRail Manager that I recorded!




More information is available at http://www.vce.com/products/hyper-converged/vxrail