We’ve always been a VMware shop.
As some of you might know, I’ve been involved in the Microsoft virtualisation stack in the past, writing guides for Virtual PC and bits and pieces for the server products, but on the server side of things, at least, VMware have always had the lead. They’ve had the advantage of beating Microsoft (and others, but this article isn’t about those) to market, and the advantage of being able to concentrate on virtualisation because it’s all they do.
Hyper-V has always been playing catch-up to VMware, and in some ways it still is, but when Windows Server 2012/Hyper-V v3 was released we finally decided to evaluate it, because word on the street was that it was now ‘good enough’.
Of course “good enough” is a relative term; what might be good enough for my needs might not be enough for yours, but Microsoft have been bigging up Hyper-V v3, so we decided to take a look.
We were impressed. For my money, VMware is still the better solution, and for some workloads it may still be the only viable solution, but Hyper-V has come on in leaps and bounds since it was first released and now does a very good job of meeting the needs of SME sysadmins, especially those who work in a primarily Microsoft server environment.
Hyper-V builds on top of the standard Microsoft technology stack: you cluster Hyper-V servers using the Windows failover clustering services, you script it with PowerShell, you patch it with WSUS, and you manage it with System Center, specifically System Center Virtual Machine Manager (SCVMM). Compare this with vSphere, which is all managed through one product, vCenter. vCenter can be expanded with plugins but fundamentally remains one product and one central management point for your vSphere deployment.
System Center is a massive product. In fact, it’s a family of products rather than just one thing, and for most businesses it’s not cheap. It is, however, extremely cheap for education, and a lot of schools and colleges here in the UK that are large enough to benefit from management tools of this kind are choosing it.
This means that if you’ve already implemented System Center as we have, deciding to implement SCVMM as well is a very cheap option. This is what made us decide to migrate from vSphere to Hyper-V: we could complete a new install of Hyper-V onto all our virtual hosts and migrate our current virtual guests from ESX to Hyper-V for substantially less than renewing our existing vSphere licences for another year. This was especially important to us in light of our need to increase virtual host capacity this year, which was again a lot more cost-effective with Hyper-V than it would have been with vSphere.
At this point we’ve moved all our virtual guests from vSphere to Hyper-V. This includes SharePoint servers, with no issues in any of the SharePoint tiers, including the SQL Server back end. Web servers, file servers and database servers have all moved fine. We’ve also moved a couple of DCs without problems, though playing virtualisation games with DCs still makes me sweat a little.
The only systems we had problems moving were one or two machines that were P2V’d into ESX where the original physical system used a weird disk config, and the only ones that outright failed were VMware virtual appliances rather than VMs we’d built on top of ESXi ourselves. I think that’s a fairly good success rate, though the appliances will be a problem, as more and more systems ship that way these days.
SCVMM – the elephant in everyone’s room
SCVMM is where things start to fall apart for Microsoft slightly. It’s not a bad product by any means. In fact it’s got a lot of features, including the ability to manage Hyper-V, ESX and Xen environments side by side, which makes both migrations and managing mixed environments a lot easier.
The problem with SCVMM is that it’s a little more fragile than it ought to be, and everything is just that little bit harder work than it should be. This means that SCVMM might be the feature that really helps Hyper-V eat away VMware’s low-to-middle-end market, while at the same time being the feature most despised by the people who actually have to use it day to day.
I’ll give you a few examples from patch management that cropped up for us, along with solutions for those SCVMM users who have been bitten by the same issues:
Installing patch/update management in vCenter:
- Choose to install Update Manager during vCenter deployment. IIRC (it’s been a while) this is just another optional component to set up during the vCenter install process.
- Run the vCenter client. Install the Update Manager plugin.
Installing patch/update management in Hyper-V:
- Tell SCVMM where your WSUS server already is, by opening SCVMM, going to Fabric, clicking on Update Server and adding your WSUS server.
Of course it isn’t that easy. To do that, you need to:
- Install the WSUS console feature on your SCVMM server. (I’m aware that you can install the full WSUS role on your SCVMM server instead, but Microsoft don’t recommend this.)
If you try to do this through the GUI on Windows Server 2012 you will find it tends to have problems running the WSUS post-install tasks, as these assume you have the full WSUS install in place; the WSUS team apparently doesn’t talk to the SCVMM team.
- No matter, just drop into an administrative PowerShell window and run this command:
Install-WindowsFeature -Name UpdateServices-Ui
- Give your SCVMM management account local admin rights on the WSUS server.
- Stop and restart the SCVMM service on the SCVMM server.
- Open SCVMM, go to Fabric, click on Update Server and add your WSUS server.
- If your WSUS server is shared with your SCCM infrastructure then locate the server in SCVMM Fabric / Update Server, right-click it, select properties and remove the tick from “Allow Update Server configuration changes” then click OK.
Congratulations, you’ve integrated WSUS with SCVMM.
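If you’d rather script the whole dance than click through it, the sequence above can be sketched in PowerShell. Treat this as a rough sketch rather than a tested recipe: the server names, Run As account name and WSUS port (8530 is the Server 2012 default) are placeholders for your own values, and you’ll want the VMM console/module installed wherever you run it.

# Sketch of the WSUS/SCVMM integration steps above; names and port are assumptions.

# 1. Install just the WSUS console feature on the SCVMM server.
Install-WindowsFeature -Name UpdateServices-Ui

# 2. Restart the VMM service so it picks up the console bits.
Restart-Service -Name SCVMMService

# 3. Add the existing WSUS server to the SCVMM fabric, using a Run As
#    account that has local admin rights on the WSUS box.
$cred = Get-SCRunAsAccount -Name 'WSUS Admin'
Add-SCUpdateServer -ComputerName 'wsus01.example.local' -TCPPort 8530 -Credential $cred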
If any of the above steps go wrong, it’s quite possible to get into a state where adding the WSUS server is greyed out as an option in SCVMM but no server is visible in Fabric/Update Server.
To fix this, connect to your SCVMM server’s SQL database in SQL Server Management Studio, locate the database entry for your faulty WSUS server config and delete it. What could be simpler? No. Really. That’s what you need to do.
To verify this, do the following on your SCVMM server (or the server holding its SQL database, at least):
- Open SQL Management Studio.
- Expand Databases, find and expand the VirtualManagerDB database.
- Find the table named dbo.tbl_UM_UpdateServer
- Right-click the table name and choose ‘Select Top 1000 Rows’.
- A query will be generated in the central Management Studio window. Click Execute.
- Review the results of the query. If this finds your (mistakenly configured) WSUS server then you have the problem I’m referring to.
- To fix the problem run the following SQL script:
DELETE FROM [VirtualManagerDB].[dbo].[tbl_UM_UpdateServer]
WHERE UpdateServerName = 'FQDN of your WSUS-Server'
- Restart your SCVMM console and try adding the Update Server again.
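If you’d rather do the check and the delete in one go from a PowerShell prompt, something along these lines should work. This assumes the SqlServer module’s Invoke-Sqlcmd is available, that the database still has its default name of VirtualManagerDB, and the instance name and WSUS FQDN below are placeholders only.

# Check for, and then remove, a stale Update Server row.
# 'SCVMMSQL01' and the WSUS FQDN are placeholders for your own values.
$params = @{
    ServerInstance = 'SCVMMSQL01'
    Database       = 'VirtualManagerDB'
}

# List whatever update servers SCVMM thinks it has configured.
Invoke-Sqlcmd @params -Query 'SELECT UpdateServerName FROM dbo.tbl_UM_UpdateServer'

# Delete the faulty entry, then restart the SCVMM console.
Invoke-Sqlcmd @params -Query "DELETE FROM dbo.tbl_UM_UpdateServer WHERE UpdateServerName = 'wsus01.example.local'"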
Deploying an ‘out of band’ patch.
At this point we were given a hotfix by Microsoft Product Support Services and asked to roll it out to all our Hyper-V hosts to fix a problem we were seeing with Exchange 2013.
Hotfixes are outside the normal Microsoft patching regime and as such are not normally available via WSUS.
That’s fine. I get that hotfixes shouldn’t be installed for funzies, so they’re not available via WSUS, but I need a way to update my Hyper-V clusters with this patch while minimising disruption. Microsoft’s Cluster-Aware Updating feature is designed to work with your update server’s patches, allowing you to automatically schedule patches for deployment out to the hosts in a virtual farm cluster via the usual ‘put host in maintenance mode and quiesce it, update, restart, take out of maintenance mode and move on to the next host’ dance. Let’s see now:
How to manually add a patch to VMware Update Manager:
As per the VMware blog on patching: open the vCenter client, click on the Update Manager icon, select Patch Repository, and click ‘Import Patches’.
How to manually add a patch to Cluster Aware Updating:
Use the CAU Microsoft.HotfixPlugin as documented here.
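For reference, invoking the hotfix plugin from PowerShell looks something like this. The cmdlet and plugin name are real; the cluster name and hotfix share path are placeholders, and CAU expects the share to follow the plugin’s documented layout (a CAUHotfix_All folder holding the .msu files, plus the DefaultHotfixConfig.xml that ships with the plugin).

# Sketch: push a hotfix out to a cluster with Cluster-Aware Updating's
# Microsoft.HotfixPlugin. Cluster name and share path are placeholders.
Invoke-CauRun -ClusterName 'HV-CLUSTER1' `
    -CauPluginName 'Microsoft.HotfixPlugin' `
    -CauPluginArguments @{ 'HotfixRootFolderPath' = '\\fileserver\hotfixes$' } `
    -Force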
Work harder, not smarter.
In both cases we see that while it’s possible to do the same things in SCVMM that we could in vCenter, it’s much harder work. I don’t intend this to be a love song for vCenter, because it has plenty of flaws of its own, but the two products reflect different approaches and it shows: vCenter is a single integrated product with a plugin architecture, while SCVMM is a console which attempts to integrate many different Microsoft products and sometimes struggles to do so.
Even with SCVMM’s issues I still think our move to Hyper-V was worthwhile, if only for the money we’ve saved. Just don’t make the mistake of thinking it was easy.
Curious to know if you feel the same way as in your conclusion statement. Was it still worthwhile?
Hi Jeff, thanks for commenting. On the whole, yes, I still think it was worthwhile, but only because we saved such a large amount of money with educational pricing for the MS stack. Things have settled down now that we’re well past the migration stage, but we still find things taking that little bit more effort in SCVMM than they used to in vCenter.
We’re looking forward to Windows Server 2012 R2 and SCVMM 2012 R2, and I’m sure they’ll improve things, but of course it’s difficult to put a value on hypothetical improvements, isn’t it?
We are not an educational institute so we will not benefit. What version of vSphere were you using? How many hosts/VMs did you have in vSphere? What was the push to make the switch? How long did it take to finally switch over? Sorry for all the questions.
No problem answering questions, Jeff, except that I wish I’d thought of including quite a few of the answers in the post in the first place ;-).
We were running vSphere 5.1, with about 8 hosts and about 60 or 70 VMs. We had the hosts in two separate farms (based on when we purchased the host servers) and when the hardware in our first VMware farm was due for replacement we took the chance to look at the cost and features vs alternatives. UK education is under a very tight financial squeeze at the moment, in a way that means that while I can usually get the money for planned capital expenditure, ongoing costs for licence/support renewals are a big problem, and we have to look at reducing those where we can.
It probably took about two weeks to make the switch, as follows:
1. Create a new Hyper-V farm with the new hardware.
2. Migrate VMs from the VMware farm whose hardware we were _keeping_ onto the new Hyper-V farm.
3. Rebuild that hardware as our 2nd Hyper-V farm.
4. Migrate VMs from the VMware farm we were scrapping onto the 1st/new Hyper-V farm.
Most of the time was spent waiting for large VMs to migrate from VMware format to Hyper-V format. It worked well but was time-consuming.
Given the improvements in hardware performance over the intervening five years, we reduced our number of hosts from 8 to 7 overall, while adding about 50% capacity to our virtual farms.
Thanks for your response.
Do you mind sharing the detailed cost of this migration?
Where did you save the most money?
Sorry for the delay in replying, Will. It was a long time ago so it’s hard to put precise figures on it, but the only thing we essentially paid for was the time of myself and one other engineer for the duration of the migration.
Keep in mind this is under educational licence terms so I can’t guarantee this will apply everywhere, but:
– We would need to buy Windows Server Datacenter licences for each host regardless.
– We had already purchased the full System Center package, so we had already licensed SCVMM as part of that.
– VMware at the time wanted over £10k per year to renew our licence and support for vSphere. This is by no means a bad price for what we had, but as you can see, it couldn’t really compete with what we now pay for Hyper-V and SCVMM.
– We had an overarching requirement to drive the ongoing costs of software down no matter what, which more than offset the increased effort that we (still, though it’s improved since 2013) see from working with the MS stack vs. vSphere.
I am wondering what software/technique you used to migrate the VMs from ESX to Hyper-V? Was this done live? If not, what kind of downtime did you experience on the VMs?