OK, I know, I am waaaay behind on the next parts of my series on System Center for SQL Server. I promise to continue with SCCM and VMM in the coming weeks. Promise!
That being said, I want to put up a tiny post today on this topic. What I am describing in this series is a data center environment where you leverage Microsoft’s best-in-class tools to provide a smooth-running operation: monitoring, managing and tuning your SQL Server footprint with minimal wasted effort, while maximizing your staff’s time and your hardware investments.
Seeing that the current trend in IT data centers is toward virtualization to achieve that server hardware optimization, Microsoft has worked with HP to put out a consolidation appliance that is built just for SQL Server database consolidation: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/Appliances/HP-dca.aspx.
This is a hardware rack solution optimized for virtualizing your SQL Server footprint on a Windows Server Hyper-V environment. And it comes installed and licensed (included with the purchase price) with SQL Server and the full System Center Suite that I have been explaining to you in this System Center for the SQL Server DBA series.
A pre-built optimized HP rack with SQL Server & System Center may not be the answer for everyone. But if you are reading this series because you would like to consider SQL Server consolidation on Hyper-V, managing your complex SQL Server environment with Virtual Machine Manager, monitoring, configuration management, etc., then the Database Consolidation Appliance is a good option to consider. This entire “private cloud” or “optimized data center” environment with SQL Server & System Center that I am describing in this series comes already installed, configured and rack-mounted for you!
Here are just a few brief, common recommendations that I give SQL Server DBAs when virtualizing SQL Server on Hyper-V. The most complete sources of best practices, guidance and recommendations are these three Microsoft whitepapers, which I reference nearly every day:
- High Performance SQL Server Workload on Hyper-V
- Running SQL Server 2008 in Hyper-V Environment
- Running SQL Server with Hyper-V Dynamic Memory Best Practices and Considerations
Here is my top-10 list, based on generalities that I have found give DBAs a decent bang for the buck. As with ANY configuration change, your mileage will vary, so make sure to first TEST, TEST, TEST on a non-production system:
- Lock Pages in Memory. In fact, this is quickly becoming a STANDARD recommendation on all SQL Server boxes and VMs that are dedicated as database servers.
- Attach all data disks to the virtual SCSI controller (rather than virtual IDE); SCSI-attached virtual disks give the best performance for SQL Server.
- Hyper-V does not over-commit memory; instead it uses “Dynamic Memory”. To make best use of this feature, set the total Startup Memory across all of the server’s VMs to a value lower than the host’s physical memory, so that every virtual machine can start in the event of an unplanned failover.
- Although you can over-commit CPUs with Hyper-V, try to avoid doing so. Testing has shown that over-committing CPUs places a very heavy burden on overall server performance.
- Make sure that the hardware you are using has CPUs that support SLAT (Second Level Address Translation). This makes a HUGE difference in VM performance, and you must think about it before ordering your server hardware!
- Also, on that same note, make sure that your server is outfitted with at least a 1 Gbps NIC, because you’ll need it for Live Migration, which transfers the VM’s memory contents over the network.
- With Hyper-V Live Migration, startup of VMs on another host is much more orderly and better behaved if you can reduce the SQL Server buffer pool BEFORE migrating the VM, using sp_configure ‘max server memory’.
- If running Hyper-V on NUMA hardware, try disabling NUMA “spanning” to ensure that each VM accesses only local-node memory.
- Don’t start up your SQL Server VMs with over-allocated resources. Benchmark the CPU & RAM each server actually needs before virtualizing. You can always ADD resources later (SQL Server supports hot-add RAM & CPU).
- Because you are adding an abstraction layer (virtualization) to your hardware and will likely begin to rapidly increase the number of SQL Server instances that you monitor and manage within the same or a smaller footprint, look at using tools like System Center Operations Manager (SCOM) and Virtual Machine Manager (VMM) to keep the Hyper-V environment healthy and efficient.
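The Dynamic Memory sizing rule above is simple arithmetic, so here is a minimal sketch of the check. The VM names, memory figures, and the root-partition reserve are all illustrative values I chose, not Microsoft recommendations; plug in your own host and VM numbers:

```python
# Verify that the sum of Startup Memory across all VMs on a host fits
# under the host's physical RAM, leaving a reserve for the root
# partition, so every VM can start during an unplanned failover.

HOST_PHYSICAL_GB = 64          # assumption: your host's installed RAM
ROOT_PARTITION_RESERVE_GB = 4  # assumption: reserve for the parent partition

# Hypothetical VMs and their configured Startup Memory (GB)
vm_startup_memory_gb = {
    "sqlvm01": 16,
    "sqlvm02": 16,
    "sqlvm03": 8,
}

total_startup = sum(vm_startup_memory_gb.values())
budget = HOST_PHYSICAL_GB - ROOT_PARTITION_RESERVE_GB

if total_startup <= budget:
    print(f"OK: {total_startup} GB of startup memory fits in a {budget} GB budget")
else:
    print(f"Over budget by {total_startup - budget} GB; some VMs may fail to start")
```

If you run this against every host in a failover cluster (counting the VMs that could land there after a failover), you catch the “VM won’t start after failover” problem before it happens.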
I was working with a partner architect last week on ideas for a SQL Server architecture that would best fit a large customer we are working with. We both started from the same place in our architecture discussion: a private cloud with automated database provisioning and massive server consolidation.
What was interesting is that we both called this “private cloud”, yet he was assuming no virtualization: just automate the provisioning based on application categorization and consolidate under specific SQL Server instances. I had the same ideas, but ASSUMED virtualization.
The moral of this story, for me anyway, is not to get caught up in black-box thinking, that is, assuming that to achieve the benefits of a virtualized private cloud you must fully adopt virtualization. Now that being said, generally speaking I prefer VMs as a consolidation practice because of the OS isolation and elastic scale.
But a key thing to remember is that you can still take advantage of overall data center automation with a private cloud on bare-metal database instances, not just virtualized ones. I was sent a link to a Charles Joy demonstration of the beta of System Center’s new Orchestrator (formerly Opalis) automating SQL Server. So VMs are certainly not mandatory for many of the private cloud benefits.
UPDATE: I just wanted to clarify what I mean above by “categorization”. When consolidating servers, to take best advantage of different hardware and networks, and to reserve the most expensive and fastest assets for the most appropriate purposes, you should classify databases & applications according to their business requirements, as opposed to simply putting systems on machines that were pegged for that purpose. The classic taxonomy is silver, gold, platinum, with different SLAs carrying RPO and RTO measurements. For example, an SLA with a 24-hour RTO (time to resolution) and a 24-hour RPO (up to 24 hours of data loss) would probably fall into the silver category. As each level increases the responsiveness of the SLA, the corresponding RPO & RTO requirements mean that, ultimately, your most critical business systems end up on the most expensive, top-of-the-line equipment with your best staff. This is where part of the business value and ROI of server consolidation can be found, whether you are creating a private cloud on bare-metal or virtualized infrastructure.
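To make the taxonomy concrete, here is a small sketch that maps RPO/RTO requirements onto the silver/gold/platinum tiers. The tier names follow the taxonomy above, but the specific hour cut-offs are my own illustration; your SLA thresholds will differ:

```python
# Classify a database/application into a service tier from its RPO/RTO
# requirements (both in hours). The cut-offs below are illustrative,
# not a standard; the 24-hour case matches the silver example above.

def classify_tier(rpo_hours: float, rto_hours: float) -> str:
    """Return the service tier implied by the stricter of RPO and RTO."""
    worst = max(rpo_hours, rto_hours)
    if worst <= 1:
        return "platinum"   # near-zero data loss, rapid recovery
    if worst <= 8:
        return "gold"       # business-hours recovery expectations
    return "silver"         # e.g. the 24-hour RPO/RTO example above

print(classify_tier(24, 24))   # the silver example from the text
print(classify_tier(0.5, 1))   # mission-critical: platinum
```

A classification function like this is what lets a provisioning workflow route a new database request to the right class of hardware automatically, rather than relying on whatever machine happened to be available.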
What to call it? Personally, I don’t care for calling virtualized data centers “Private Cloud”. I prefer “Data Center Optimization”. And when you go beyond simply virtualizing SQL Server instances into automation, provisioning templates, self-service, billing, etc., you’ve definitely implemented a more optimized, streamlined data center.
This is what is being called a “Private Cloud”. But that is a fluffy term that does not say anything about the optimization of the data center or of the database.
Anyway, I continued this conversation through a series of responses to the online editor at Search SQL Server here.
I am currently working on 2 projects: one is a proof of concept, and the other is an ongoing, 2-year project for one of our largest Microsoft customers here on the East Coast. In both cases, these customers are implementing what Microsoft and the IT industry are referring to these days as “private cloud”. I’m not sure that term is 100% a good fit:
First, when people hear “Cloud” today, they immediately think of public Internet-based Cloud Computing. Private Cloud, for the most part, is based on local, on-premises infrastructure. It is a reconfiguring of your data center practices and infrastructure to create an agile, cost-effective factory that can quickly provision and expand or collapse capacity (elasticity) based on end-user (customer) demand. Some of the features that would constitute a “private cloud” are listed below; self-service, metered billing and virtualized workloads are key to private cloud, too.
Second, it says very little about what it actually does. “Cloud” is an overloaded and ill-defined term in general right now. That being said, I don’t think I have a better term for it yet, so I’m just throwing stones! Typically when talking to IT shops about comprehensive data center efficiencies such as “Private Cloud”, we will discuss “Optimized Infrastructure”. But I think that terminology also falls short of what is being proposed in Private Clouds.
That being said, let me take a few minutes of your time to quickly lay out what “private cloud” means in the context of this blog and SQL Server databases, and then link you to further reading that provides deep-dive detail into each area:
- Deploy applications and databases as virtual machines
- Utilize commodity hardware and load-balance VMs
- Provide self-service portals to allow end-users (customers) to request new, expanded or smaller databases
- Constantly monitor server & DB usage and sizes and dynamically (automatically) resize and migrate databases to least-used servers
- No idle stand-by-only servers
- Implement workflow to approve user requests and kick-off provisioning scripts
- Automatically provision users & databases from scripting (PowerShell)
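The request-approval-provisioning pipeline in the list above can be sketched as a tiny workflow skeleton. Everything here is hypothetical: in a real deployment the approval step would be a System Center workflow and the provisioning step a PowerShell script driven by Virtual Machine Manager, but the shape of the pipeline is the same:

```python
# Minimal sketch of a self-service provisioning pipeline:
# request -> approval workflow -> provisioning script.
# All names, quotas and messages are illustrative stand-ins.

from dataclasses import dataclass

@dataclass
class DatabaseRequest:
    requester: str
    db_name: str
    size_gb: int

def approve(request: DatabaseRequest, quota_gb: int = 100) -> bool:
    # Stand-in for the real approval workflow (quota/policy checks,
    # manager sign-off, etc.).
    return request.size_gb <= quota_gb

def provision(request: DatabaseRequest) -> str:
    # Stand-in for the PowerShell/VMM script that would actually
    # create the VM or database.
    return f"provisioned {request.db_name} ({request.size_gb} GB) for {request.requester}"

req = DatabaseRequest("jsmith", "SalesDB", 50)
if approve(req):
    print(provision(req))
else:
    print("request rejected; route back to requester")
```

The point of wiring it this way is that the approval and provisioning steps are separate, swappable pieces: the same pipeline works whether the provisioning target is a VM or a bare-metal instance.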
The Microsoft Self-Service Portal, Virtual Machine Manager, and the SCOM monitoring tools are the pieces that enable a fully Microsoft-based private cloud. Notice that there is not a lot of SQL Server database-centric material there: Private Cloud is an infrastructure that brings flexibility and elasticity to your environment.