Thursday 6 November 2014

Phew, vForum Day One Done!


Well, just a quick one, as I thought it was time for a new entry, and having just finished day one of vForum at Luna Park it was a good chance for an update. I had the great opportunity to present in the EMC session today with Matt Zwolenski, our ANZ SE leader, on some of our cool software-based solutions.

This 40 minute section covered 4 demos (maybe too many for 40 minutes) that we produced out of the local solution center. I think it could be a week before my sleep catches up from that one. It is interesting just how much preparation time filling a 40 minute segment can take up. This is amplified by my belief that if I am going to show it, I need to build it and prove it first. All the demos I show I also build and produce myself. Maybe that explains the dodgy Camtasia call-outs.


That said we showed:

  • The ViPR Data Services
    • This is very relevant for VMware as it is now providing the Object Storage Services and the snapshot repository for the Database-as-a-Service offerings in vCloud Air
  • VVols with the EMC VNXe
    • A demo showing the 4 parts of implementing Virtual Volumes: 1) Protocol Endpoints, 2) Storage Providers, 3) Storage Containers, 4) Storage Policies
  • RecoverPoint for Virtual Machines
    • Very cool replication technology now available at a VM-granular level
  • The VMware + EMC Hybrid Cloud
    • Showed the vRealize suite servicing Virtual Machines: catalogues (vRealize Automation), VM mobility (vCloud Connector), cost transparency (vRealize Business), Storage-as-a-Service (vRealize Automation + ViPR)
As a techie I really do enjoy building this stuff, and the technology from both VMware and EMC is very cool. It is hard to put a prep time on the work to produce the demonstrations, but it was stretched over weeks. With 3 of the 4 solutions shown running at release levels still in beta or earlier, it did bring on some interesting challenges.

It does go a long way to show how far these technologies have come. The old days of any service management solution being an endless drag on professional services are definitely starting to move behind us.

Anyway, day one done, one speaking slot and a live vBrownBag podcast completed. Once the vForum roadshow completes in ANZ I will publish the 4 demos to YouTube.
 

Friday 6 June 2014

Cool Solution: Hadoop File Services with EMC Isilon

I'm Sold!

First let me say, by my own admission, I am an infrastructure guy! That said, I have been lucky enough recently to be given the chance to dive into the world of Big Data and PaaS. As a techie, the extensive technology options in this area are very impressive and, as a consequence, I have had a lot of fun combined with late nights and heavy learning.

I have always worked on the KISS principle ('Keep It Simple, Stupid!') and Hadoop fits into that, with the exception of its file service, HDFS. With that layer of abstraction over the file system, the ability to manage and populate the file system is non-existent without specific tools written for the task.

I know DAS and scale-out through the data nodes is great and builds a big pond for your big data by combining the compute and disk resources of 1000+ server nodes. Yay, all good! But putting my EMC and infrastructure hat on for a moment, what about the following:


  1. How do I back up my HDFS-based data?
  2. How do you use those other cool storage capabilities such as snapshots, auto-tiering, etc.?
  3. How do I get real-time analysis on data without having to move it into HDFS?
  4. How could you share the data from within HDFS?
  5. What if you need more compute resources in your cluster but not storage, or the other way around?
This is where EMC's Isilon comes to the rescue through its OneFS file system. The flexible open access into its large single namespace is invaluable. All the joys you expect from a true scale-out, enterprise-class storage array are there, as well as the capability of accessing the same file system in multiple ways at the same time. Think about a web site logging to an NFS mount that is part of the HDFS file system, allowing for real-time analytics against it!

Want to see this in action? Have a look at this demo: http://www.youtube.com/watch?v=Qx9BMzZa8UI
 
Further information can be found at http://www.emc.com/domains/isilon/index.htm

EMC ESI with Microsoft Applications

EMC's ESI, it's FREE!!!!

A few weeks back I recorded a series of short videos that showcase the ease with which you can extend your applications directly from EMC's FREE Storage Integration Suite (ESI). These are short, sub-90-second videos that each focus on just one particular feature provided through the ESI console. As part of our application stack value prop this is a great example of what EMC can provide for a customer and, of course, it is FREE!

ESI is like EMC's glue into the Microsoft ecosystem and includes:

  • Simple user console for storage, replication and application management
  • Support for Windows, Hyper-V, vSphere and XenServer
  • PowerShell library with almost 200 cmdlets
  • System Center Orchestrator Integration Pack
  • System Center Operations Manager Management Pack
  • Hyper-V VSS Provider and associated PowerShell library
  • Support for Exchange (up to 2013) including native and third-party replication (enabled by RecoverPoint)
  • Support for SharePoint and SQL Server (SQL AlwaysOn and FC coming in a few weeks)
Storage Provisioning with ESI

Microsoft Exchange Discovery with ESI

Microsoft Exchange Database Provisioning with ESI

Microsoft Exchange Database Replication with ESI

Microsoft SharePoint Content Provisioning with ESI


Tuesday 3 June 2014

It's Not Just About Backup!

Why Have Different Levels of Protection?

I like the saying that comes up when people refer to an old classic car: 'they don't make them like that anymore'. The thing is, the statement should be followed with 'thank goodness'. Can you imagine putting up with the level of reliability, quality, safety and comfort you would have got in years gone by from a car purchased from a dealer today?

Same goes with technology. I still wake up some nights and shudder when remembering the old tape backup routine. This used to go one of two ways. The first was the nightly scheduled tape shuffle, which thankfully I was not tasked with. The other was the pre-rollout backup we would run before pushing any new service out. This used to involve us kicking the job off, then trotting down the road to a golf driving range to knock a few buckets of balls into no man's land to kill the time (usually hours).

So looking at that, we had two different use cases but only one method at our disposal: the inglorious, forever unreliable tape backup. Fast forward 15 years (maybe more, but I keep that quiet) and we have many different forms of protection at our disposal, including storage replication, snapshotting and tape backup (but hopefully VTL, not those horrible tapes).

All these options serve a purpose and work together nicely. If we were to classify the different use cases into buckets they could be:

Business Continuity (DR) = Remote Replication, constant protection driven by RPO and RTO requirements

Operational Protection = Local Snapshotting (or continuous protection as is provided with RecoverPoint) typically done by set interval (storage level) or self service (VM level)

Long Term Retention = Traditional Backup style typically run once a day

Each of these has its purpose and specific business requirements that drive its implementation. Interestingly, I see a lot of the BCM and Long Term Retention being positioned but little of the Operational Protection being catered for. Back to my scenario at the start: if I could have had snapshotting at my disposal (or, if I was really lucky, EMC RecoverPoint Continuous Data Protection), my golf practice (aka time wasting) would not have happened, as we could have snapshotted the services we were updating and started the rollout straight away.

Self service snapshotting is easily available these days thanks to two things:


  1. Virtualisation with VM-level snapshotting (checkpointing in a Hyper-V world)
  2. Automation tools for storage (EMC's Storage Integration Suite is a good example)


So that is all cool for those known events, but what about the unknowns, such as data corruption? That is what scheduled snapshots protect you against, providing a much more granular way to protect your systems than the typical 24-hour cycle of Long Term Retention. You may only retain a short set of snapshots, such as 1-7 days' worth, but they can provide good peace of mind for those services deemed worthy.

I do realise that replication can also provide a way of rolling back, but typically it is 'to the last change' or committed IO operation, so a corruption could easily be on the remote site as well as the source. Also, replication would require that traffic to come back down the wire, which adds time to the recovery/rollback process.

Another benefit of Operational Protection is that it can provide an easy and quick way for copies of datasets, such as those within a database, to be presented to an alternate location, for example from production to a test/dev instance.

Anyway, I got that off my chest so I feel better. Operational Protection = Good!

Just as a last note on this, I did not include High Availability (HA) as I am more looking at how the old functions we used tape for have evolved. There is some really cool stuff that can be done with stretched high availability spanning physical locations, as is supported with vSphere Metro Storage Clusters and Microsoft's Failover Clustering with products such as EMC's VPLEX, but that is a big enough topic on its own.



Thin Provisioning with VAAI and VASA in vSphere Working Together

That Damn Pesky Thin Provisioned Threshold Alarm in vCenter


Have you ever had those alarms go off in vCenter telling you that your thin LUN has exceeded its consumed space threshold? This is an interesting repercussion of VASA (storage awareness reporting) feeding its view of the thin provisioned LUN back to vCenter, which then reacts to it.

So first let's have a look at the issue. Thin provisioning in a block storage world tends to start small and then continue to grow until the configured upper limit is reached. That makes sense and is what you would expect: as data matures, the efficiencies of thin provisioning are reduced. In the wonderful world of virtualisation, where VMs tend to be fairly fluid due to provisioning, deletion and moving activities, a LUN can be full one moment and half empty the next. Add vSphere Storage Clusters and SDRS and it does not even need manual intervention.

Anyway, back to the problem at hand: the thin datastore all of a sudden looks full, so you shuffle stuff around and clean up space. All good at the datastore level, but you are still getting alarms off the datastore, specifically the 'Thin-provisioned volume capacity threshold exceeded' alarm. The reason behind this is that once a block has been written to, the array does not know how it is being utilised, so it reports it as consumed.

This is what VAAI Unmap is all about: giving the host a way of clearing a previously written-to block in a way that the array can act on. The drawback is that VAAI Unmap is still not an automated process, so it needs to be executed through the CLI (esxcli storage vmfs unmap <datastore>) or via PowerShell as a manual process.

Be aware, though, that this only returns space to the array that has been released at the datastore level, such as what would happen if you deleted a VM. There is also the in-guest space issue: a thin provisioned VMDK will grow in much the same way that a datastore does. You can return space that has been deleted within the guest through tools that need to be run inside the guest, such as Microsoft's SDELETE (http://technet.microsoft.com/en-us/sysinternals/bb897443.aspx) or VMware's own Guest-Reclaim tool (https://labs.vmware.com/flings/guest-reclaim).
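Putting the two halves of the reclaim strategy together, the commands look roughly like this. Treat it as a sketch only: the datastore name and drive letter are placeholders, and the esxcli syntax shown is the vSphere 5.5-era form, so check it against your own ESXi build first.

```shell
# Host level: ask the array to reclaim dead space on a VMFS datastore.
# Run from the ESXi shell; 'Datastore01' is a placeholder name.
esxcli storage vmfs unmap -l Datastore01

# In-guest (Windows): zero the free space inside the VM so the thin VMDK
# can be reclaimed; SDELETE is the Sysinternals tool linked above.
sdelete -z c:
```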

By getting both your in-guest and ESXi host level space reclaim strategy in place you can ensure that VAAI + VASA = Happy House and you are getting the most efficiency out of your storage (at the space level anyway).

EMC and Microsoft Applications Working Together

EMC Storage Integration Suite


I recently recorded a number of demo videos showing off some of the cool capabilities of EMC's ESI. These videos show simple storage provisioning activities aligned to applications such as Microsoft Exchange and SharePoint.

ESI is a very cool tool that is available from EMC for FREE (yes, you read that correctly) and provides a rich set of capabilities including:

  • Simple user console for storage, replication and application management
  • Support for Windows, Hyper-V, vSphere and XenServer
  • PowerShell library with almost 200 cmdlets
  • System Center Orchestrator Integration Pack
  • System Center Operations Manager Management Pack
  • Hyper-V VSS Provider and associated PowerShell library
  • Support for Exchange (up to 2013) including native and third-party replication (enabled by RecoverPoint)
  • Support for SharePoint and SQL Server

Demos:

Storage Provisioning with EMC's Storage Integration Suite

EMC Storage Integration with Exchange using ESI

Exchange Database Provisioning with EMC's ESI

Exchange Database Replication Enabled with EMC's ESI

SharePoint Content Database Provisioning with EMC's ESI

Product Page:
http://www.emc.com/data-center-management/storage-integrator-for-windows-suite.htm

Thursday 27 February 2014

SCVMM 2012R2 Powershell Cmdlet Changes

Damn, SCVMM Not Supported


Funny, but sometimes changes occur with very little warning, and SCVMM 2012 R2 has had some that I keep tripping over. Within the fantastic EMC Solution Center in Sydney, Australia, I am fortunate to have a great collection of toys (oops, I meant IT equipment) to play with (again... I meant to say develop and test against). I have come across this issue a few times but find it very difficult to dig up information related to it.

It relates to the extensive and, I must say, fantastic set of PowerShell cmdlets that come with System Center Virtual Machine Manager. The initial changes were very obvious: from 2008 to 2012 the cmdlets started to be prefixed with 'SC'. An example of this is that Get-VMMServer became Get-SCVMMServer.

This in itself makes absolute sense; you need a unique identifier when your environment could have multiple libraries providing the same cmdlet, as happens with a cmdlet like Get-VM, which is supplied by the vSphere, SCVMM and Hyper-V PowerShell libraries. So no problem there. What did happen, though, is that Microsoft, I assume to soften the impact, supported the old cmdlets via aliases. These aliases were retired with the release of 2012 R2, which I also fully understand as you need to move on eventually. So what's the problem?
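To make the rename concrete, here is a quick before-and-after (the server name is just a placeholder for illustration):

```powershell
# Pre-2012 style; this worked via an alias up to 2012 SP1 but fails on 2012 R2:
Get-VMMServer -ComputerName "vmm01.contoso.local"

# 2012 R2 style: the same operation via the 'SC'-prefixed cmdlet:
Get-SCVMMServer -ComputerName "vmm01.contoso.local"
```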

The issue is that not everyone has moved onto the new cmdlets, even though they have had a good chunk of time. My guess is this may be because they didn't need to change, so why bother, until now. Anyway, I have been caught out by this issue with products from both Microsoft (e.g. Team Foundation Server's Lab Center) and VMware (vCloud Automation Center).

So if you find something supports SCVMM 2012 SP1 but not 2012 R2, this is not a bad place to start looking when trying to determine why.

Cloud + SDDC, Now it Makes Sense!

So It Is Not Just Fluff!


Funny, for years now I have been talking about, presenting and building Cloud-based services; in the early days I just did not realise it. It was at VMworld in 2009 that the term Cloud first got passed around in a way that every vendor seemed to be jumping on it and attaching it to their product lines. The reality, though, was that there was no clear definition of what a Cloud was.

So what is a Cloud service? Firstly, it needs the following:

  • Automation: delivery and lifecycle management needs to be automated, which in turn, done properly, reduces risk and increases agility. Delivering a Virtual Machine to a client in 7 days (or even a day) does not represent what a Cloud is meant to provide its customers. This has a preceding activity: processes need to be well defined, as there is no use automating something to be delivered as-a-Service without knowing all the speeds and feeds that relate to it. Lifecycle is also critical; you need to ensure that you don't end up with VM sprawl.
  • Assurance: there needs to be a way for administrators and customers to be aware of the integrity and performance of the service. This directly feeds into operational health and performance monitoring, chargeback/showback facilities and service assignment states. A customer and administrator need to know how well a service is run, how much is consumed and how compliant it is with the SLAs assigned to it.
  • Self Service: there needs to be an electronic shop front for the services on offer. A cloud is consumed on demand, as is now the norm for the tablet and smartphone community as provided by Google Play and the Apple and Microsoft Stores. This in turn leverages automation to provide the requested service. Can you imagine grabbing an app from your iPhone's store and then waiting a month for it to be available? :)
So, interestingly, there have been a number of products available that were either the good friend of the consultants charged with implementing them, or just not good enough to be taken seriously when looking at your Data Center; really just pimped-up provisioning engines.

This is where the Software Defined Data Center comes in, along with all the related services. The SDDC to me provides two critical functions:
  1. The enablement of automation
  2. The extension of the capabilities of existing physical assets; think what VMware has done to your servers with ESX, but across the rest of your DC assets, including networking and storage
Looking at Infrastructure-as-a-Service, the two primary go-to solutions are vCloud Automation Center (VCAC) from VMware and the new kid on the block, the Windows Azure Pack (WAP) from Microsoft. Both are positioned as platform neutral, providing the shop front to your DC services. They each have good and bad points, which I will look to cover in future posts.

You also have the SDDC-enabling solutions coming in thick and fast: think VMware with NSX for networking, EMC with ViPR for storage, and Microsoft's all-infrastructure-embracing System Center supporting automation of bare metal hosts, virtual machines, networks (through the virtual networking stack) and storage management.

The point is, in the past 1000s of hours of scripting and coding were required to enable a cloud (it kept me busy so I didn't mind), but now it is becoming reachable with the push towards the SDDC and the maturing and simplification of the service management solutions.

What makes me think this? Well, it used to take months to get a service up and running for a POC following the fundamentals of the Cloud as listed above. I would also spend a fair bit of time deep in some form of IDE cutting code.

Now I find myself complaining when it takes me a few days to build one up from scratch for a POC.

A Long Time Between Posts



Blogging is an interesting activity. With stars in my eyes I committed to starting this blog as a place to record my activities in the world of virtualisation. What I found, though, was that I spent so much time focused internally within EMC and on our customers that the blog was initially postponed and eventually forgotten.

Now here we are, 3 years after the first entry, and I am committed to starting it again. In the interim I did have my friend Scott Drummonds post a few of my activities on his blog vPivot, but even he has now re-focused on new exciting streams and has left EMC, so it's back to doing this myself :)

Anyway, as one of the early vSpecialists (VMware) and now an mSpecialist (Microsoft), I work day in, day out with EMC's cloud-based solutions covering both VMware and Microsoft platforms. As a consequence I get my hands dirty with both, which allows me to get to know the good, the bad and the ugly of each.

Hopefully this time I can keep it up to date. :)