Weapon of choice?
Recently I’ve been involved in some internal and external discussions about whether a XenDesktop deployment should use Provisioning Services (PVS) or Machine Creation Services (MCS). Ever since Citrix released MCS with XenDesktop 5, this debate has been going on and on and on. Even Citrix is divided. Ask a consultant and chances are high he’ll tell you “PVS!”; ask an SE and you can bet he’ll say “MCS!”.
And there lies the root of the problem, or at least a very big part of it. Understand the position of, and what’s at stake for, each of these groups of Citrites or partner engineers, and it becomes clear why each has its platform of choice. Selling consulting services? A longer engagement is better. Product pre-sales? A shorter engagement is better.
Let’s be quite honest: if your job (as a consultant) is to build an infrastructure at a customer site and you’re getting paid by the hour, you are used to long engagements with a customer. If you (as a Sales Engineer) are trying to show the concept of desktop virtualisation to a customer, you want a tool that is lean and mean and saves you the most time, so you can get more customers aboard. Don’t get me wrong: this is purely why one group favours PVS and the other favours MCS, and it helps explain why there is no single answer if you ask several people. It does not yet say which technology is better. Neither group is lying to the customer or deceiving them. They are just using the tools they are accustomed to and have had success with, so there is a natural barrier to switching over. The whole MCS vs PVS debate of course did not exist until Citrix announced XenDesktop 5.0 at Synergy Berlin in 2010.
The whole notion of getting out of your comfort zone and doing something new or different isn’t always easy. I know all too well now that I’m at Nutanix. There will be loads of nay-sayers.
But since PVS had been doing the rounds successfully for quite some time already, why did we see this preference for MCS slowly developing within Citrix from a sales (engineering) standpoint?
Because it was simpler. The problem in the beginning, however, was that MCS was not yet a viable solution when you needed to scale to larger environments, because it relied on the storage layer. MCS was simply ahead of its time: the industry was only just starting to acknowledge the VDI IOPS problem and build solutions to solve it. Storage was a huge bottleneck and IOPS didn’t come cheap in 2010.
Now some of you might think “hey, that dude now works for Nutanix, so he wants to ship his stuff to solve the IOPS issue instead of solving it with PVS and the new RAM overflow function”, but you could not be more wrong.
Why? I was an SE. What do SEs want? Simplicity. What do customers need and expect? Exactly.
Besides: solving IOPS is not the main idea behind Nutanix, nor is it the single trick we put on the table. We are here to make the entire datacenter simpler. Not just your VDI environment. We take the pain away that comes with managing these types of environments.
Yes, IOPS are an important architectural hurdle to overcome in any virtualisation project, especially when unleashing large numbers of virtual desktops onto an already existing infrastructure. It will show you the weaknesses of that existing infra really quickly. However, solving IOPS should never create additional management overhead in that infrastructure. Placing band-aids or shifting the load somewhere else does not solve the problem we are trying to address in the first place, which is getting to an environment that not only performs well, but is also scalable, reliable, manageable, cost effective and, as such: SIMPLE.
To be honest: MCS made it significantly easier (and faster) for me to win projects while I was with Citrix because it made stuff simpler. Customers love that!
A trip down memory lane..
Rewind a couple of years to Christmas time, 2006. Citrix announces the acquisition of a company named Ardence. This company from Waltham, Massachusetts had been in business for a long time (since 1980) and had a really cool product which allowed you to boot physical PCs over a network, streaming a single image from a server to multiple machines at the same time. Not a completely new concept for the time (who remembers the Novell bootrom-enabled NIC era?), but very innovative, because it allowed for the streaming of Windows as an operating system.
The first version of Ardence (4.1) under the Citrix flag was primarily focused on delivering Presentation Server workloads to bare metal, but it was originally designed for a PC use case. The acquisition was a very good one and a natural fit in any Presentation Server environment, as Server Based Computing was rapidly growing and a more central way of managing physical servers was needed. Remember that even though VMware was already around, in 2006 its hypervisor was not very capable of virtualising SBC workloads due to the massive overhead and decrease in user density, and as such was not deemed good enough. Of course that changed later, and virtualising SBC workloads is common practice nowadays.
The Ardence product was renamed Provisioning Server and later Provisioning Services, and slowly evolved towards a more datacenter-focused approach, adding features and being bundled into the Platinum edition of Presentation Server / XenApp 4 and later also into the XenDesktop editions. Its standalone desktop-only version was eventually pulled and bundled completely into XenDesktop, with use rights restricted by edition.
PVS under the hood.
The way Ardence/PVS worked was to intercept the local disk at a really low level and redirect it over a network to a server. In essence it was a form of storage virtualisation, and using PXE boot technology allowed the PCs to be completely diskless. A single image could be streamed to multiple PCs at the same time, based on associating MAC addresses with a vDisk. The vDisk needed to be made from a master PC, converted into a vDisk with a special tool, and then put onto the server. On the server side you could then set the image mode to “standard”, which allowed you to stream 1-to-many. Intelligence within the software made the installed operating systems unique by managing the host naming and AD memberships, so you would have no duplicate machines on the network.
PVS depended heavily on a proper network and a DHCP setup providing the correct bootfile name and server. And since Windows is of course not a read-only O/S, you had to choose one of several options for placing a write cache, which would be flushed at the next reboot, keeping your machines’ O/S in pristine order.
A classic video demonstrates the process in a PC environment:
Now the previous paragraphs were mostly written in the past tense, but actually almost everything I described there is still needed today: the vDisk conversion, the network setup, the server setup, write cache management, everything.
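The streaming mechanics described above can be sketched as a toy model: MAC-to-vDisk assignment, one read-only image in “standard” mode streamed to many devices, and a per-device write cache discarded at reboot. To be clear, every class and method name below is my own invention for illustration; this is not the actual PVS object model or API.

```python
# Toy model of PVS streaming (illustrative only; not the PVS API):
# one read-only vDisk in "standard" mode is streamed to many target
# devices, identified by MAC address, each with its own write cache.

class VDisk:
    def __init__(self, name, mode="standard"):
        self.name = name
        self.mode = mode          # "standard" mode enables 1-to-many streaming


class PVSServer:
    def __init__(self):
        self.devices = {}         # MAC address -> {vdisk, write cache}

    def assign(self, mac, vdisk):
        # Target devices are associated with a vDisk purely by MAC address.
        self.devices[mac] = {"vdisk": vdisk, "cache": {}}

    def write(self, mac, block, data):
        # Windows is not read-only, so all writes are redirected into the
        # per-device write cache, never into the shared image.
        self.devices[mac]["cache"][block] = data

    def reboot(self, mac):
        # On reboot the write cache is flushed, leaving the O/S pristine.
        dev = self.devices[mac]
        dev["cache"] = {}
        return dev["vdisk"].name


server = PVSServer()
gold = VDisk("win7-gold")
server.assign("00:50:56:aa:bb:01", gold)      # same image,
server.assign("00:50:56:aa:bb:02", gold)      # many devices
server.write("00:50:56:aa:bb:01", 42, b"dirty page")
booted_from = server.reboot("00:50:56:aa:bb:01")
```

The key property the model captures is that the shared image is never modified: only the per-device cache absorbs writes, and it evaporates on reboot.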
Fast forward a couple of years to Synergy Berlin, 2010. Citrix announces the upcoming release of XenDesktop 5.0, the first push towards a more simple, scalable and agile architecture, which would lead to the end of the IMA architecture. The older XenDesktop versions prior to 5.x were still based on IMA, and this proved to be a huge problem in large deals: IMA was meant to run in XenApp environments, where 1000 servers was more or less the limit, but since that scale was rarely seen it was never deemed a problem. Now customers tried to run thousands of desktops with XenDesktop, and they hit the IMA limits hard. So a new architecture was needed, and FMA, the FlexCast Management Architecture, was born.
FMA was at first only available for VDI workloads as XenApp was still a separate product and would continue to have another 3 full versions (5.0, the dreaded 6.0 and the final 6.5) based on IMA. Only with the release of XenDesktop 7.x did the XenApp workload make its way to FMA, first as a Hosted Shared Desktop option in XD, later brought back as XenApp 7.5 for only that specific workload.
When XD 5.0 was released, Machine Creation Services also became available, and its design was focused on simplifying XenDesktop setups. The competition at that time was very eager to point out how complex XenDesktop was to set up, and to be frank: those claims had merit. Although it was not as complicated as some of the competition made it seem, setting up a complete XenDesktop environment, especially when mixed with XenApp, was quite a big task and took multiple days to complete.
So the first step towards simplicity for Citrix was to ease the time to setup. The broker installation was drastically simplified, as was rolling out a group of desktop VM’s. I could now do the entire setup of a XenDesktop farm in just a couple of hours and have plenty of time to drink coffee while doing it.
MCS under the hood.
The biggest installation hurdle that was taken away was that MCS was built right into the solution. There was no separate server setup as with PVS, no network configuration to be done, no lengthy image conversion process, and fewer consoles to work with. It was bliss 🙂
MCS itself works by linking VMs to a single disk image. Now don’t confuse this with the linked clones of VMware’s products. A lot of just-in-time identity tech was put into the product so you don’t have to sysprep or do any other nasty, time-consuming recomposing to get your VMs booted up. It was quite similar to PVS in that respect, minus the complexity.
Each MCS-booted VM reads from the master image, based on a snapshot of a master VM, which is then copied to all configured datastores in the XenDesktop host connection configuration. Every time you deploy a new image to a machine catalog, the broker takes a snapshot if one doesn’t already exist, and then starts a full image copy to each datastore, which could be shared or local storage.
Because we still have the principle of a read-only image, we need to separate the writes. Instead of the Write Cache term used in PVS, MCS uses a Diff Disk and an Identity Disk per VM. All writes are placed into the Diff Disk, which can grow to any size depending on the number of writes that occur, while the Identity Disk is a read-only 16 MB disk which just holds some info about the name and AD account of the VM, ingested at boot time to make the VM unique. The whole use of the Diff and ID disks is hidden from the admin; there is no option to select where to place them as in PVS (i.e. on local disk, in RAM or on the server), other than the datastore on which your VMs are created, nor is there an option to specify how large they can become.
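The rollout flow above can be sketched roughly like this: snapshot the master once, full-copy the image to every configured datastore, then give each VM a growable diff disk and a fixed 16 MB identity disk. Every function and field name here is an illustrative assumption of mine, not the XenDesktop SDK.

```python
# Rough sketch of an MCS catalog rollout (names are illustrative, not the
# XenDesktop SDK): one snapshot, a full image copy per datastore, and a
# diff disk plus a 16 MB identity disk per VM.

IDENTITY_DISK_MB = 16            # fixed-size, read-only identity info

def deploy_catalog(master_image, datastores, vm_names):
    snapshot = master_image + "-snapshot"              # taken once per rollout
    base_copies = {ds: snapshot for ds in datastores}  # full copy to each datastore
    vms = []
    for i, name in enumerate(vm_names):
        vms.append({
            "name": name,
            "datastore": datastores[i % len(datastores)],  # VMs spread over datastores
            "base": snapshot,            # shared read-only master image
            "diff_disk_mb": 0,           # grows with writes; no size cap
            "identity_disk_mb": IDENTITY_DISK_MB,
        })
    return base_copies, vms

copies, vms = deploy_catalog("win7-master", ["ds1", "ds2"], ["vdi01", "vdi02", "vdi03"])
```

Note how the per-VM overhead is tiny (an empty diff disk and 16 MB of identity), while the per-datastore overhead is a full image copy, which is exactly where the rollout time goes.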
The only exception to this is the addition of the Personal vDisk, which can be set to a specific size and placed on a separate datastore. PVD is something that works for both PVS and MCS and I’m not going to discuss that further in this blog.
The good, the bad and the ugly..
Not all was 100% super duper when MCS was released.
A drawback of the MCS approach is that the more datastores you use, the longer the copying part of the rollout takes. You could put it on your SAN and just use one LUN, but you would have to accommodate all write IOPS being redirected back to your SAN, which is troublesome on many occasions, hence the reason everyone is looking at alternative approaches leveraging local storage solutions, for example.
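Some back-of-the-envelope arithmetic makes that drawback concrete: because MCS full-copies the snapshot to every datastore, the copy phase grows linearly with the datastore count. The throughput figure below is an assumption for illustration only.

```python
# Copy-phase estimate: rollout time grows linearly with the number of
# datastores, since the snapshot is full-copied to each one. The 100 MB/s
# throughput is an assumed figure, purely for illustration.

def rollout_copy_minutes(image_gb, num_datastores, mb_per_sec=100):
    total_mb = image_gb * 1024 * num_datastores
    return total_mb / mb_per_sec / 60

# A 40 GB image copied to 8 local datastores at ~100 MB/s:
minutes = rollout_copy_minutes(40, 8)
```

Under those assumed numbers you are looking at the better part of an hour per image update, which is why consolidating datastores (or making the copy cheap at the storage layer) matters.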
I’ve already written about how you can simplify MCS datastore usage to solve this problem using Nutanix as underlying infrastructure:
Getting back to the essence of MCS: the updating mechanism itself is very easy; just boot the master VM, do your updates, shut it down and start the rollout. There is no need to do a vDisk conversion, or to write-enable a previously created vDisk. There is also no danger of screwing up the NIC binding order by accidentally ingesting a NIC driver update through Windows Update, which could leave your VM unbootable. And every, I repeat, EVERY PVS admin has had the dreaded “Inaccessible Boot Device” blue screen after an image update. Sometimes the only way to update a PVS vDisk is to redo the whole image build process.
Now the copying process of MCS can be troublesome with multiple datastores, but how about PVS? PVS is even more complicated, because first of all you have to determine the number of PVS servers needed to stream to your VMs. For a couple of hundred VMs you might get away with 2 or 3, but for a serious enterprise deployment you are looking at tens of servers. And each of these servers needs to be able to read the vDisk you created for your VMs, so you get the additional design challenge of how to distribute the vDisk to all those servers. There are many approaches to it, ranging from a simple CIFS share (but don’t forget to tune the OpLocks, set the Stream Service accounts etc…), to localising the vDisk by copying it every time you update (PowerShell…), to DFS or external clustered filesystem solutions (Sanbolic Melio), etc.
How scalable is that solution? How easy is it to manage and how much time will an image update cost me? How about the diskspace involved?
Send in the clones!
An estimated number of PVS servers for a 30000-seat VDI site is around 30. Physical PVS servers are claimed to scale to 5000 VMs in benchmark situations, but I’ve never seen that deployed. A more reasonable number is 1000 VMs per server, because you also need to take into account that one or more servers might fail and you need extra capacity for those occasions.
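That rule of thumb is easy to express as a sizing sketch: plan around 1000 VMs per streaming server, then add spare capacity so the surviving servers can absorb a failure. These numbers mirror the estimates in the text, not an official Citrix sizing formula.

```python
import math

# Sizing sketch using the rule of thumb from the text: ~1000 VMs per PVS
# streaming server, plus spare servers to absorb failures. Not an official
# Citrix sizing formula.

def pvs_servers_needed(total_vms, vms_per_server=1000, spares=1):
    streaming = math.ceil(total_vms / vms_per_server)
    return streaming + spares

servers = pvs_servers_needed(30000)   # ~30 streaming servers, plus a spare
```

Compare that to MCS, where the equivalent of this entire calculation simply does not exist.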
Failover for desktops in case of a PVS server failure is automatic, but not instant. There will be a significant delay during the switch-over (or rebalancing) of the VMs. And did you know you cannot specify more than 4 PVS servers in the bootfile for high availability?
The next thing you need to take into account is that PVS uses a SQL database to hold all the site information. So add another design to your task list. Oh, and the database needs to be always available, which will be out of sync with the database requirements for the XenDesktop site itself in the near future (Local Host Cache, anyone? 😉): http://support.citrix.com/proddocs/topic/provisioning-7/pvs-ha-db-mirror.html
Now here comes the next fun part: maintaining your PVS servers. PVS is a separate piece of software, with its updates mostly released out of sync with XenDesktop releases. PVS updates are almost always full server reinstalls and require server reboots (or at a minimum a restart of the Streaming service) and database updates. There is no centralised update management in PVS. Because the Target Device agent is also updated with most server updates, you need to rebuild your vDisk(s), as you can’t update the agent while booted in write mode. So I hope you kept your original VM in sync with your vDisk updates, or are prepared to do a reverse imaging procedure.
Last but not least: PVS tuning. There are more tuning dials in the PVS console than in the cockpit of a Boeing 747. And they are not highly descriptive or obvious in their functionality or effect either.
And the annoying part is that more tuning might be needed at the VM level as well as on the network.
Do or do not, there is no retry?
One of the most well-known issues with PVS is the dreaded excessive retries problem. In short, this is where there is a disturbance between the Target Device and the PVS server’s Stream Service. Something disturbs the stream of packets, so a retry is sent, and this immediately results in performance degradation. Now, a single retry is not very noticeable, and it’s OK to have some retries during the day, but sometimes the situation escalates into excessive retries (i.e. more than 60 in a minute), and this will kill performance instantly.
There are numerous reasons why this can occur, from faulty NIC drivers, to NIC drivers that need to be tuned (disabling task offloading etc.), to bad cabling, switch configurations, and so on. The problem is hard to troubleshoot and, like the Inaccessible Boot Device BSOD, almost every PVS admin has had to deal with it somewhere down the road. There are numerous blog posts describing possible solutions to the problem, but when you’ve tried everything in there, it more or less comes down to trial and error.
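As a trivial illustration of the rule of thumb above: a handful of retries per minute is normal, but a sustained spike past roughly 60 per minute spells trouble. The threshold constant and function name below are my own framing of the text’s rule, not anything PVS exposes.

```python
# Trivial detector for the "excessive retries" condition: a few retries per
# minute is normal background noise, but a spike past ~60/minute (the rule
# of thumb from the text) kills performance.

EXCESSIVE_PER_MINUTE = 60

def excessive_minutes(retries_per_minute):
    """Return the indices of minutes whose retry count crosses the threshold."""
    return [i for i, count in enumerate(retries_per_minute)
            if count > EXCESSIVE_PER_MINUTE]

# A mostly quiet stretch with one bad episode (e.g. a flaky NIC driver):
bad = excessive_minutes([2, 0, 5, 140, 300, 12])
```

The hard part in practice is not spotting the spike, it is finding which of the many possible causes (driver, offloading, cabling, switch) produced it.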
It’s time to let go…
The only thing PVS still has going for it, in my mind, is that it can stream to physical workloads. But how many of us really still want to manage physical hardware? Do we really want to set the clock back to 2004? PVS started out as a great solution, but time and other technology have caught up with it.
I think it’s time to really start appreciating what XenDesktop and MCS can do for us today. Its radically simplified approach, and the availability of the complete FlexCast range in one product, is awesome, and it blows anything VMware currently does in Horizon out of the water. PVS is still a bolt-on technology, with limited integration into the GUI. It just has too many things you need to take care of, which are completely gone, hidden or solved with MCS.
Scalability-wise, MCS depends on the XenDesktop Controllers. A single controller can easily handle 20000 desktops; the storage layer takes care of the rest. To get similar scale with PVS you would need to add 20-25 servers to the infrastructure.
Using PVS as a way to solve deficiencies or performance issues in the underlying storage infrastructure will only get you short-term results. Citrix positions the new Write Cache in RAM with overflow to disk as a way to save you from the high cost of shared storage or solutions in that space, but does it really save you money? Remember that the function was not even designed to do what it does now. It was designed to prevent VMs from blue screening when the RAM cache was full: merely a safeguard against inadequate memory sizing, or rogue processes filling the RAM. When your physical disk space runs out, it will still blue screen. And how about the side effect of the write cache doubling or tripling in size when you turn the overflow feature on, because it now writes in 2 MB blocks? Can you accommodate that?
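The 2 MB block effect is easy to put numbers on. The sketch below is simple worst-case arithmetic with invented workload figures, not measured data:

```python
# Rough arithmetic for the cache inflation: with overflow enabled, the write
# cache is written in 2 MB blocks, so each distinct block touched costs a
# full 2 MB however little of it was actually written. The workload numbers
# below are invented for illustration.

BLOCK_MB = 2

def cache_footprint_mb(distinct_blocks_touched):
    return distinct_blocks_touched * BLOCK_MB

# 1 GB of raw writes scattered across 1200 distinct blocks:
raw_mb = 1024
footprint = cache_footprint_mb(1200)      # 2400 MB of cache
inflation = footprint / raw_mb            # roughly 2.3x the raw data
```

With a scattered write pattern, a doubling or tripling of the cache footprint falls straight out of the block-granularity maths, which is exactly the sizing surprise to plan for.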
What about the pain that comes with building and managing PVS? Why reintroduce complexity when there is such a great, simple solution built right into the heart of XenDesktop that can be used for all types of virtual desktop workloads? PVS does not do anything for persistent desktops, which are popping up more and more. MCS, with the release of XD 7.x, now also handles your 2008 and 2012 Hosted Shared Desktop workloads with the same ease with which you roll out your VDI environment. If you are moving to XenDesktop 7.x from XenApp 6.5, be sure to explore the MCS option. It will save you a lot of headaches.
I just can’t quite understand why Citrix creates such a cool new piece of technology with MCS on one side, and then puts PVS on extended life support on the other, to stretch its life just a little longer.
Scalability is only one of the many aspects of a solid XenDesktop design. Being able to manage it in a simple way, and removing complexity, is much more important.
It’s time to let go.
(big thanks to David Gaunt for proof reading and correcting my fat fingering. Read his ramblings at http://nutanixnoob.com)