Ever since Citrix released XenDesktop 5.0, we have had the PVS vs MCS debate: Provisioning Services versus Machine Creation Services. When I was back at Citrix, we (the Systems Engineers) always had this discussion with the CCS guys (Citrix Consulting Services).
Consulting was highly in favour of PVS because it was a proven technology, while the SEs always pushed for MCS because it could simplify VDI PoCs and remove a couple of pieces of infrastructure. Both techniques are great, but each has its own positives and negatives.
I always loved PVS as an SE, but tried to do PoCs with MCS as much as I could. However, MCS has always been much heavier on IOPS than PVS, and since an existing SAN can't keep up with the IO demand of a full-scale VDI environment, the only logical way to deploy MCS at scale was to use local storage: put SSDs or FusionIO in every host, and configure the XenDesktop Broker to use all the individual local datastores.
The way MCS works, when used in a non-persistent way (as most Citrix customers use it), is that the broker copies the master image to each datastore specified in the host connection. This can be either a local datastore on each host or a shared datastore. The available datastores are read from the hypervisor cluster for the admin to select.
After this copy is complete (which can take a while, depending on the number of datastores configured), all the VMs in the Catalog are pointed to these local copies. Each VM is also linked to its own Identity Disk and Diff Disk.
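To make the resulting layout concrete, here is a minimal sketch in Python of the disk structure a non-persistent MCS catalog ends up with: one full base-image copy per datastore, plus a small identity disk and a throwaway diff disk per VM, both linked to that local copy. All names and paths are illustrative; this is not the Citrix SDK, just a model of the layout described above.

```python
# Hypothetical model of an MCS non-persistent catalog layout.
# Assumptions (not from Citrix docs): file names, .vhd extension,
# and the build_catalog helper are all invented for illustration.

def build_catalog(master_image, datastores, vms_per_datastore):
    """Return a dict describing the disks MCS would create per datastore."""
    catalog = {}
    for ds in datastores:
        base_copy = f"{ds}/{master_image}-base.vhd"  # one full copy per datastore
        vms = []
        for i in range(vms_per_datastore):
            name = f"vdi-{ds}-{i:03d}"
            vms.append({
                "name": name,
                "base": base_copy,                        # shared, read-only
                "identity_disk": f"{ds}/{name}-id.vhd",   # holds machine identity
                "diff_disk": f"{ds}/{name}-delta.vhd",    # writes, reset on reboot
            })
        catalog[ds] = {"base": base_copy, "vms": vms}
    return catalog

catalog = build_catalog("win7-gold", ["ds-host1", "ds-host2"], 2)
```

Note how every VM on a given datastore shares the same read-only base copy; only the identity and diff disks are unique per VM, which is exactly why the master image has to be copied to each configured datastore up front.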
While this approach works well with local storage, the way it has to be configured is a bit cumbersome.
First you have to configure all the local datastores. Space-wise, you would separate the boot partition of your hypervisor from the actual storage for the VMs. When all the hosts have been added to the hypervisor cluster, you go into Citrix Studio and create a host connection. You point it to, for example, vCenter, select the cluster, select the networks for your VMs, and then choose between local and shared datastores. When you click local, you have to select every datastore available on the hosts. Each and every one.
In the above example you see the step of the wizard in which you configure the storage. For the sake of a screenshot demo I selected the local storage of all the nodes in my Nutanix cluster, but that's not something you would do in a real environment; it's just to show how it looks when selecting your datastores. In a 20-host environment you would click 20 separate datastores. Annoyingly, there is no 'select all' option, so my sympathies to all those admins running 40-50-host VDI environments ;), because you will first be selecting all those datastores and then waiting for the copy process every time you roll out a new image.
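The cost of that per-datastore copy loop can be sketched in a few lines of Python. This is purely illustrative (the function and its error message are invented, not Citrix behaviour verbatim): one sequential copy per configured datastore, and a single unreachable datastore aborts the run partway through, wasting the copies already done.

```python
# Illustrative sketch (not Citrix code) of an image rollout over
# locally configured datastores: one copy per datastore, and the
# whole operation fails if any datastore is unreachable mid-run.

def rollout(master_image, datastores, reachable):
    """Copy master_image to each datastore in order; abort on the first failure."""
    copied = []
    for ds in datastores:  # sequential: 20 hosts means 20 copies
        if ds not in reachable:
            # earlier copies are wasted work; the admin starts over
            raise RuntimeError(
                f"cannot reach {ds}; rollout failed after {len(copied)} copies")
        copied.append(ds)
    return copied

stores = [f"ds-host{i}" for i in range(1, 21)]  # a 20-host environment
rollout("win7-gold-v2", stores, set(stores))     # all reachable: 20 copies
```

The pain scales with host count: every new image version pays the full loop again, which is the waiting the paragraph above complains about.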
Now when you use Nutanix as your virtualisation infrastructure platform, this whole process becomes insanely simple. Since we use local storage and pool it into a virtual filesystem which we then present back to all the hosts in the cluster, we get the speed of local storage with the simplicity of shared storage. On top of that we bring a feature called "Shadow Clones" to the table. More on this in a bit.
So what you do in Studio when running on Nutanix is simply select the cluster-mounted datastore you created in Prism to store your MCS golden image and all the VM diff disks. And that's just one click.
From then on, every time you roll out a new master image version, the copy only needs to happen once. It goes to local storage first, so the process is very fast. The VMs on all the hosts are then reconfigured to point to the new master image and start reading from it.
Now it's time for Shadow Clones to shine. Shadow Clones is a mechanism which, when enabled, automatically detects when a single virtual disk is being read by multiple VMs. Once it sees that, it marks the disk read-only and caches it on the local hosts in the cluster, so the image still ends up local to each host, despite being configured as shared storage.
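The detection logic just described can be sketched as follows. This is a rough Python model of the idea, not Nutanix code: the class name, the reader threshold, and the cache structure are all assumptions made for illustration.

```python
# Rough sketch of the Shadow Clones idea (illustrative, not Nutanix code):
# track which nodes read a vdisk; once multiple distinct nodes read the
# same vdisk, mark it immutable and serve later reads from a local copy.

class ShadowCloneTracker:
    THRESHOLD = 2  # assumption: two distinct reader nodes trigger cloning

    def __init__(self):
        self.readers = {}       # vdisk -> set of node ids that read it
        self.immutable = set()  # vdisks marked read-only
        self.local_cache = {}   # node -> set of vdisks shadow-cloned locally

    def record_read(self, vdisk, node):
        self.readers.setdefault(vdisk, set()).add(node)
        if len(self.readers[vdisk]) >= self.THRESHOLD:
            self.immutable.add(vdisk)
        if vdisk in self.immutable:
            # subsequent reads on this node are served from a local shadow copy
            self.local_cache.setdefault(node, set()).add(vdisk)

t = ShadowCloneTracker()
t.record_read("gold-image", "node-1")
t.record_read("gold-image", "node-2")  # second node: vdisk becomes immutable
t.record_read("gold-image", "node-3")  # read now cached locally on node-3
```

The key property is the same one the paragraph describes: the golden image is written once to shared storage, yet each host ends up reading it from local media.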
It also solves the problem of a host being unavailable for whatever reason while the admin forgets to deselect it in the Studio host connection configuration before rolling out a new image. Normally your rollout would fail when Studio tries to copy the master image to that one unavailable host. All fine when it's the first in the list, rather painful if it's one of the last, because the entire operation fails and you have to start all over again.
Using Nutanix under your XenDesktop environment really makes MCS shine: first because of the great performance of the nodes, second because the image rollout is much faster (just one copy), and third because it becomes much simpler in terms of configuration and management.
I call that a Win-Win-Win.
P.S. I now truly love MCS, especially because you can now also use it in Hosted Shared mode.