In the last few months I have been struggling to understand why there is such a difference, in terms of mass adoption, between virtualizing server workloads and virtualizing desktop workloads (also known as “VDI” or “Virtual Desktop Infrastructure”). I have been exposed to the phenomenon of x86 virtualization since around 2000, when the idea was as simple as taking a high-end server and miniaturizing it into many small virtual servers. Similarly, for the last 3 years I have been exposed to the other big use-case for x86 virtualization, “Desktop Virtualization”, and I can tell you that the time it took for the first, traditional use-case to take off (seeding the market with the idea – piloting and proofs of concept – mass adoption) was far shorter than the time it is taking for VDI to go through the same phases. This doesn’t mean that VDI is not taking off, but there is no doubt that, 3 years after its introduction, I have seen many more production implementations of VMware ESX than I have seen of VDI.
Why is that? Isn’t VDI just virtualizing XP rather than Windows Server? Not quite, I would say. Let’s dig into some of the details (not in strict order of importance).
– Desktop Virtualization alternatives. While I am focusing this discussion on the VDI concept, some analysts imply, for good reasons, that desktop virtualization is not just VDI (i.e. virtualizing Windows XP and putting it on a server in the back). There are alternative architectures to “virtualize a desktop”, such as Windows Terminal Services, Application Virtualization, OS streaming and many others. To complicate things further, these technologies are sometimes complementary to each other and sometimes alternatives to each other. So customers are challenged, from the very beginning of a potential desktop virtualization project, with a great deal of input and information that they find hard to understand and digest. In the server space this has never been much of a problem, since “virtualizing a server” has always had a single meaning: “hardware virtualization” (i.e. getting as many virtual hardware partitions as possible out of a single physical server). So in the server virtualization realm the confusion was far less than the one being created nowadays by all the potential architectures at very different layers of the desktop software stack (VDI being just one of them).
– VDI products complexity. On top of the above there is another layer of complexity. 8 years ago it was much easier to understand which products you needed to adopt a server virtualization model. If you used to buy 20 physical servers and install 20 Windows instances, with server virtualization you would buy 2 physical servers, 2 VMware ESX 1.x licenses and install 20 Windows instances. As easy as that. You couldn’t do much differently and it worked great (so why bother?). VMware has since introduced new versions of the software and enriched its value proposition “linearly” with Virtual Center 1.x and eventually with VI3. To adopt a desktop virtualization model, on the other hand, you have to buy a virtualization platform and a connection broker, you need to decide which access device you want to use, etc. etc. For every single layer of the architecture there are multiple implementations, which translate into multiple different products that are supposed to do similar things (if you want to know more about the architecture of VDI, have a look at this presentation). As a result, in the last few years this desktop virtualization market has been very frothy, with ISV’s entering the space and ISV’s buying out other ISV’s, etc. etc. It is clearly much more difficult right now for a customer to understand what to do and which ISV to buy a VDI solution from than it was 8 years ago for a customer entering the server virtualization space.
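To give a feel for why the product choice is so much harder, here is a minimal sketch of the combinatorial problem. All the layer names and option counts below are hypothetical, for illustration only: the point is simply that when every layer of the VDI stack has several competing products, the number of possible architectures multiplies, whereas early server virtualization had essentially one obvious stack.

```python
from math import prod

# Hypothetical option counts per layer of a VDI architecture
# (illustrative numbers, not a survey of real products).
vdi_layers = {
    "virtualization platform": 3,
    "connection broker": 4,
    "access device": 3,
    "remoting protocol": 2,
}

# Early server virtualization: one platform choice, one obvious stack.
server_layers = {
    "virtualization platform": 1,
}

# Each layer choice is independent, so architectures multiply.
vdi_combinations = prod(vdi_layers.values())
server_combinations = prod(server_layers.values())

print(f"possible VDI stacks: {vdi_combinations}")        # 72
print(f"possible server-virt stacks: {server_combinations}")  # 1
```

Even with these small made-up counts, a customer faces dozens of plausible VDI stacks to evaluate against a single, well-trodden server virtualization stack.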
– Overall cost of the solution. In the desktop space there is a predominant metric, “cost per seat”, that you can hardly find in the server space. Sure, customers understand that a server virtualization solution could cost slightly more than a traditional layout of a string of small physical servers, but apparently they are more ready to discuss the benefits (in terms of TCO) of a virtualized solution and factor them into the overall costs. This is especially true when these customers are considering high-availability and disaster recovery solutions that are either very expensive in the standard physical space or not achievable at all. The “cost of the desktop”, on the other hand, is a very strong metric that most customers use when discussing the overall costs of a desktop virtualization solution. A couple of days ago I met with a customer that, as part of a very large bid, was buying (branded and good quality) desktops for 233€ (monitor and Windows license included). Needless to say, in a VDI solution comprising the back-end servers, the virtualization software, the proper Microsoft licenses, the connection broker software, the thin clients and the miscellaneous utilities you might want to use to complement the scenario, the cost per user might be VERY WELL above that 233€. While for a server virtualization scenario the overall acquisition price of the solution can get close to what a customer would pay for a standard physical deployment (or at least within a reasonable range that is offset by the tremendous advantages), to create a business case for VDI you have to include a detailed TCO analysis just to get on par with a standard desktop deployment. And we all know how difficult it is to “sell” on TCO (especially to desktop buyers).
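A back-of-the-envelope calculation makes the cost-per-seat gap concrete. The 233€ figure is the one from the paragraph above; every VDI line item below is an assumed, illustrative number (not a quote from any vendor) chosen only to show how the per-seat acquisition cost stacks up once you add all the components.

```python
users = 100

# The physical desktop price from the bid mentioned above
# (monitor and OEM Windows license included).
physical_per_seat = 233.0  # EUR

# Hypothetical acquisition costs for a 100-user VDI deployment
# (all figures are assumptions for illustration).
vdi_costs = {
    "back-end servers": 2 * 8000.0,           # two host servers
    "virtualization software": 2 * 3000.0,    # per-host licenses
    "connection broker": users * 50.0,        # per-user license
    "Windows licensing": users * 100.0,       # e.g. VECD-style per-desktop fee
    "thin clients (incl. monitor)": users * 200.0,
}

vdi_per_seat = sum(vdi_costs.values()) / users

print(f"physical desktop: {physical_per_seat:.0f} EUR/seat")
print(f"VDI acquisition:  {vdi_per_seat:.0f} EUR/seat")
```

With these assumed numbers the VDI acquisition cost lands well above the physical desktop price, which is exactly why the business case has to lean on a TCO analysis (management, power, HA, DR) rather than on the purchase price alone.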
– Microsoft licensing. Of particular importance is the issue of MS licensing. Historically customers have bought Windows PC’s, and these PC’s have come with a (very cheap) so-called OEM Windows license (that is, when you buy a PC you get a Windows license tied to it). This OEM license CANNOT be used in a VDI scenario, so you need to buy brand new licenses. And this is where the “fun” starts. This is a very bad story for customers, from a complexity perspective as well as from a cost perspective. At the time of this writing, Windows licensing for virtual desktops is still pretty confusing: “should I buy a retail version of the OS?”, “should I buy the VECD (Vista Enterprise Centralized Desktop) license under Software Assurance?”, “what if I am not a customer with MS Software Assurance?”, etc. etc. All in all, whatever you decide fits your scenario best as a customer, it’s going to be more expensive than the cheap OEM Windows license you used to buy tied to your desktop purchases. We all hope MS will make this transition easier for our customers, but so far … not so good.
– End-user experience. From an end-user perspective there is a big difference between virtualizing a server and virtualizing a desktop. You, as a CIO / Sys Admin, can virtualize a server or even the whole server farm and no one at your company would even notice. It’s your own decision whether to do it or not. In a desktop virtualization scenario, as soon as you start deploying the first thin client you are opening it up to the whole company. Immediately you have exposed your decision to dozens / hundreds / thousands of other individuals that, for good reasons or political reasons, will start to challenge you. Good reasons might be technical limitations you have to compromise with as of today: areas where a thin client can sometimes hardly cope, compared with a standard desktop deployment, in terms of local device attachment support / multimedia video performance / flexibility / off-line capabilities, etc. etc. I can assure you that no single “average end-user” would ever realize that their mail system in the back is now running on a VM whereas yesterday it was physical; however, even the most “IT-candid end-user” would understand that he / she is using Outlook from a “little box where I cannot even attach my iPod anymore” as opposed to the PC he / she was used to! And that is when the political problems start. On this I have always said that a very happy Sys Admin has a frustrated end-user base and, vice versa, a very frustrated Sys Admin has a happy end-user base. It’s a matter of compromise, as usual: VDI technology advancements will allow the CIO / Sys Admin to meet the standard business requirements, whereas end-users will need to understand that they can’t treat their business access device as if it were their home PC.
I think these are some of the major road-blocks preventing VDI from becoming real and starting the massive deployment we have seen in the traditional server virtualization use-case. All in all, I think the root of the problem when trying to re-architect desktop deployments is that, whatever you do, it’s basically a “hack”. If you think about it for a minute, the WHOLE industry has only one default, which is “the end-user will be using a Windows desktop”. Whatever you do with any technology the industry creates (be it an application, a physical USB device or whatever) to make it work in a different scenario… it is a “hack”. We have implemented hacks with Terminal Servers and we are doing the same with VDI and any other technology such as Application Virtualization. As long as there is an industry that creates “stuff” for the PC and just a handful of people trying to make that “stuff” work differently in a different scenario… it will always be an uphill battle. I look forward to the day when the industry as a whole will embrace these non-PC deployments in a more structured way than the current “I’ll do this assuming the PC and then someone will be able to hack it to make it work for alternative scenarios”. I look forward to the day when the average “CIO Joe” that needs to create an IT infrastructure will not only think “I have 1000 users, I have to buy 1000 Windows PC’s” but rather… “I have 1000 users, I need to buy a VDI solution for them”.
At that point all these issues – products and architecture complexity, end-user experience, licensing, etc. etc. – will fade away… because VDI will have become the “obvious / default” way to give end-users access to IT.