Quick Note on Always On Availability Groups Performance in SQL Server 2012

It’s still early in the lifecycle of SQL Server 2012 (heck, we RTM’d just 2 weeks ago and aren’t yet GA!) but some of the whitepapers, best practices, guidance, etc. are starting to flow out, and I’ll make it a point to try and highlight some of these as they come out. That is, the ones that affect projects or customers that I’m working on, anyway.

So here is one that caught my eye: I saw this in Books Online about making AlwaysOn Availability Groups perform better in terms of resolving network names and IP addresses with the AG Listener:

Always specify MultiSubnetFailover=True when connecting to a SQL Server 2012 availability group listener or SQL Server 2012 Failover Cluster Instance. MultiSubnetFailover enables faster failover for all Availability Groups and failover cluster instance in SQL Server 2012 and will significantly reduce failover time for single and multi-subnet AlwaysOn topologies. During a multi-subnet failover, the client will attempt connections in parallel. During a subnet failover, SQL Server Native Client will aggressively retry the TCP connection.

I copied that directly from the BOL link above. The bolded areas in that passage are mine, and I wanted to point out that it states that adding “MultiSubnetFailover=True” is the recommendation for clients connecting to AG Listeners even in single-subnet AlwaysOn topologies (FCIs as well). If you scroll down in the full passage, you will also note the bit about OLEDB not supporting the “MultiSubnetFailover” keyword, which gives me the opportunity to again remind everyone that OLEDB for SQL Server client connectivity has been deprecated as of SQL Server 2012.
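For reference, on the client side this is just one extra keyword in the connection string. A minimal sketch for SQL Server Native Client or ADO.NET, where the listener name, port and database are hypothetical:

```text
Server=tcp:AGListener,1433;Database=SalesDB;Integrated Security=SSPI;MultiSubnetFailover=True
```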

Resource Governor Updates in SQL Server 2012

When I travel to visit DBA teams at different customers, I see a number of common recurring problems and issues that it seems all SQL Server DBAs face as part of their daily routines. It is only natural for a DBA on any data platform to focus on database performance, uptime and business continuity as the 3 main areas of the job.

But within database performance and uptime, there is a feature in SQL Server that I often do not see utilized enough: Resource Governor. You can search the MSSQLDUDE blog here at WordPress to find my postings about Resource Governor. One of the best resources for gaining an understanding of Resource Governor is here at Microsoft.com’s How-To video collection.

There are 2 very important things that Resource Governor provides a SQL Server DBA, and both directly help you produce results day in and day out:

1. You can “classify” all incoming connections to your SQL Server by aspects of the connection string from the client like application name, user name, group name, etc. I recommend doing this on your busy production servers even if you don’t necessarily intend to manage resources and throttle those connections. This is because you end up with a system that can be managed & monitored more granularly. Instead of monitoring or managing runaway queries or busy connections at the server level, you get a picture of your system based on application, user, group, etc. by Resource Pool, which you can watch in Perfmon with the Resource Pool Stats counters.

2. Now that you’ve classified the connections, you can set thresholds on resources and govern your server at a more effective, granular level for CPU, memory, MAXDOP and more.
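As a sketch of what classification looks like, here is some hedged T-SQL; the pool, group and application names are hypothetical choices of mine, not anything mandated by SQL Server (note that the classifier function must be created in master):

```sql
USE master;
GO
-- Hypothetical pool and workload group for a reporting application
CREATE RESOURCE POOL ReportingPool;
GO
CREATE WORKLOAD GROUP ReportingGroup USING ReportingPool;
GO
-- Classifier: route connections by APP_NAME() from the connection string
CREATE FUNCTION dbo.fnRGClassifier() RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp SYSNAME = N'default';
    IF APP_NAME() = N'ReportViewer'   -- hypothetical application name
        SET @grp = N'ReportingGroup';
    RETURN @grp;
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnRGClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```

Test the classifier carefully on a non-production system first: every new connection to the instance passes through it.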

So, what’s new with SQL Server 2012 Resource Governor? There are 3 areas that I think will be very beneficial to DBAs when using Resource Governor in 2012 over SQL Server 2008 R2:

  1. Today, CPU throttling only kicks in when there is contention with other busy connections. SQL Server 2012 Resource Governor allows you to set a hard cap on CPU for each pool.
  2. SQL Server 2012 has some changes to the memory manager that now allow Resource Governor to control memory beyond query grants, which is all that setting MAX memory in Resource Governor can control today in SQL Server 2008 R2.
  3. You can now even affinitize a resource pool (based on your classifier function) to a scheduler or NUMA node.
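To illustrate those three items, here is a sketch of the 2012 pool syntax; the pool name and percentages are illustrative only:

```sql
-- SQL Server 2012: hard CPU cap and scheduler affinity on a resource pool
CREATE RESOURCE POOL ReportingPool
WITH (
    MAX_CPU_PERCENT = 30,           -- soft limit, applies only under contention
    CAP_CPU_PERCENT = 50,           -- new in 2012: hard cap, even when CPU is idle
    MIN_MEMORY_PERCENT = 10,
    MAX_MEMORY_PERCENT = 40,
    AFFINITY SCHEDULER = (0 TO 3)   -- new in 2012: pin the pool to schedulers 0-3
);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```

AFFINITY also accepts NUMANODE = (…) if you would rather affinitize by NUMA node than by individual scheduler.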

A Look at 3 Common SQL Server General Performance Tuning Measures

Happy 2012!

I was playing around with a couple of separate blog posting ideas last week, as it was a sort of slow week in the office. I was going to link to some very good blogs from others in talking about Lock Pages in Memory, Instant File Initialization and Optimize for Ad-Hoc Workloads. As I was drafting those posts, I realized that they all fall into a similar broad category, and rather than post them separately, I’ve decided to create a single post that lets you look at these 3 options together as: “3 general options with SQL Server that MAY improve performance”. These are all set at either the SQL Server instance level or the server level, so I think of them as using a sledgehammer for a problem that may actually require just a tack hammer. In other words: your mileage may vary, and you should start your troubleshooting and performance tuning at a more granular level and work your way up: schema, stored procs, database, instance, server, etc.

Lock Pages in Memory (LPIM)

It is becoming more & more common to find this as a recommendation for all SQL Server workloads. LPIM is a Windows Server-level privilege that is not enabled by default; it must be assigned to the SQL Server service account in order for it to work. What it does is tell the OS that it cannot page SQL Server’s data pages out of memory to the OS paging files on disk, ensuring that SQL Server’s data pages will stay in memory. To set up this option, see this MSDN page here. While there are instances where setting LPIM can be helpful, on systems where SQL Server is starved for memory it can stop the OS from performing its normal memory management, because when the OS asks the SQL Server process to page some memory to disk, SQL Server will not be able to do so. And in virtual environments, make sure that you set the VM’s reservation to match the amount of RAM in the virtual machine configuration for VMware, while in Hyper-V, LPIM is actually quite helpful with Dynamic Memory.
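One quick way to verify whether the privilege is actually in effect (rather than just granted) is to check the locked-page counter SQL Server exposes; a sketch using a DMV available in SQL Server 2008 and later:

```sql
-- Non-zero locked_page_allocations_kb indicates LPIM is in effect
SELECT locked_page_allocations_kb
FROM sys.dm_os_process_memory;
```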

Instant File Initialization

Similar to LPIM above in the sense that this is something you set from Windows, not SQL Server, and that it also affects the entire server, Instant File Initialization can provide immediate benefits to specific operations that require SQL Server to interact heavily with data files. Kimberly Tripp has the best write-up of this feature at SQLskills here. In her review of Instant File Initialization, note that Windows does not need to zero-out database data files (this only works on data files, not log files), and this simple change in SQL Server’s file handling can result in some dramatic improvements in certain normal SQL Server activities. Notice these measurements from Kimberly:

Performance Test with Zero Initialization
Hardware: Dell Precision 670 Dual Proc (x64) with Dual Core, 4 GB Memory, RAID 1+0 array w/4-142 GB, 15000rpm disks
CREATE DATABASE with 20 GB Data file = 14:02 minutes
ALTER DATABASE BY 10 GB = 7:01 minutes
RESTORE 30 GB DATABASE (EMPTY Backup) = 21:07 minutes
RESTORE 30 GB DATABASE (11GB Backup) = 38:28 minutes

Performance Test with Instant Initialization
Hardware: Dell Precision 670 Dual Proc (x64) with Dual Core, 4 GB Memory, RAID 1+0 array w/4-142 GB, 15000rpm disks
CREATE DATABASE with 20 GB Data file = 1.3 seconds
ALTER DATABASE BY 10 GB = .4 seconds
RESTORE 30 GB DATABASE (EMPTY Backup) = 5 seconds
RESTORE 30 GB DATABASE (11GB Backup) = 19:42 minutes

Read more: http://sqlskills.com/BLOGS/KIMBERLY/post/Instant-Initialization-What-Why-and-How.aspx
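If you want to verify whether Instant File Initialization is in play on your own server, one common technique (a sketch; the database name is a hypothetical throwaway) is to turn on trace flags 3004 and 3605 so that file-zeroing operations are written to the error log:

```sql
DBCC TRACEON (3004, 3605, -1);  -- log zeroing operations to the error log
CREATE DATABASE IFI_Test;       -- hypothetical throwaway database
EXEC sp_readerrorlog;           -- "Zeroing ..." lines for the DATA file mean IFI is NOT active
DBCC TRACEOFF (3004, 3605, -1);
DROP DATABASE IFI_Test;
```

Note that you will always see zeroing messages for the log file; IFI only skips zeroing for data files.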

Optimize for Ad Hoc Workloads

This is an option that you set at the instance level within SQL Server; see here. This one falls a little more, in my mind, into the category of performance tuning applications, T-SQL and plan caching. But since it is an option that you can set and then try out by testing, testing, testing, I felt it was fine to lump it into this one post! With database workloads that run a lot of ad-hoc one-off queries against the server, you can stop the plan cache pollution that SQL Server may induce when it creates plans for those ad-hoc queries. With “Optimize for Ad Hoc Workloads” set to true, SQL Server will instead store a small stub in the plan cache when a query is run for the first time. When that query is run a 2nd time, SQL Server will then create a fully compiled plan, since the plan is now more likely to be reused. This can help to relieve memory pressure by not allowing the plan cache to fill up with compiled plans that are not likely to be reused. A common workload where this may be helpful would be a transactional or operational database system for an application that issues ad-hoc real-time queries, the type of dynamic SQL that does not lend itself well to caching reusable plans. I would not expect this to be very helpful in more predictable, query-centric reporting workloads like a data warehouse.
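Enabling it is a simple sp_configure call at the instance level:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```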

Definitely some ideas and things for you to try out on your SQL Servers. I remember a friend once told me, back when I was a developer, that the best DBAs obsess over their database servers and spend perhaps too much time tuning and optimizing them. Hopefully those of you who fall into that category will find some of these useful.

Best, Mark

SQL Server Virtualization Recommendations on Hyper-V

Here are just a few brief common recommendations that I give SQL Server DBAs when virtualizing SQL Server on Hyper-V. The best complete source of best practices, guidance and recommendations are these 3 Microsoft whitepapers, which I reference nearly every day:

  1. High Performance SQL Server Workload on Hyper-V
  2. Running SQL Server 2008 in Hyper-V Environment
  3. Running SQL Server with Hyper-V Dynamic Memory Best Practices and Considerations

Here is my top-10 list, which is based on generalities that I have found give DBAs a decent bang for the buck. As with ANY configuration change, your own mileage will vary, so make sure to first TEST, TEST, TEST on a non-production system:

  1. Lock Pages in Memory. In fact, this is quickly becoming a STANDARD recommendation on all SQL Server boxes and VMs that are dedicated as database servers.
  2. Virtual SCSI-attached virtual disks for all data disks give the best performance for SQL Server.
  3. Hyper-V does not over-commit memory and instead uses “Dynamic Memory”. To make best use of this feature, set the total Startup Memory for all of the server’s VMs to a value that is lower than the host’s physical memory, so that all virtual machines can start in the event of an unplanned failover.
  4. Although you can over-commit CPUs with Hyper-V, try to avoid doing so. Testing has shown that over-committing CPUs has a very heavy burden on overall server performance.
  5. Make sure that the hardware you are using has CPUs that support SLAT. This makes a HUGE difference in VM performance and you must think about this before ordering your server hardware!
  6. Also, on that same note, make sure that your server is outfitted with at least a 1 Gb NIC, because you’ll need it for Live Migration, which transfers VM memory over that NIC.
  7. With Hyper-V Live Migration, startup of VMs on another host is much more orderly and better-behaved if you reduce the SQL Server buffer pool BEFORE migrating the VM, using sp_configure ‘max server memory’.
  8. If running Hyper-V on a NUMA platform, try disabling NUMA “spanning” to ensure that the VM accesses only local node memory.
  9. Don’t start your SQL Server VMs out over-provisioned with resources. Benchmark the CPU & RAM needed for each server before virtualizing. You can always ADD resources later (SQL Server supports hot-add RAM & CPU).
  10. Because you are adding an abstraction layer (virtualization) to your hardware and will likely begin to exponentially increase the number of SQL Server instances that you monitor and manage within the same or smaller footprint, look at using tools like System Center SCOM and VMM to keep the Hyper-V environment healthy and efficient.
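For item 7 above, shrinking the buffer pool before a migration is a quick sp_configure change (the value here is illustrative only; remember to restore your original setting after the migration completes):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Temporarily reduce the buffer pool (value in MB, illustrative only)
EXEC sp_configure 'max server memory', 4096;
RECONFIGURE;
```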


Configure a SQL Server Standalone Performance Monitoring Instance

SQL Server 2008 R2 Enterprise Edition has enough combined features that you can now centralize monitoring of your SQL Server instances at a single point. What this allows you to do is enable proactive performance monitoring of your SQL Server infrastructure, so that you can identify trends and alert on common areas that require monitoring, such as query performance, server capacity and resource utilization.

This is not meant to replace more feature-rich off-the-shelf products that monitor the health of SQL Server and are specifically designed to let DBAs take SQL Server performance monitoring to the next level. That being said, here is a common set of steps that you can take to enable centralized monitoring with SQL Server 2008 R2:

  1. Determine how many instances you are going to monitor centrally from this monitoring SQL Server instance. If the number of instances is greater than 25, then you may want to license this monitoring server as Datacenter edition, because the UCP feature, which I’ll touch on in this posting, requires Datacenter to monitor more than 25 SQL Server instances centrally. At minimum, you will require Enterprise Edition so that you can use MDW (also explained below), which is required at each monitored instance, too.
  2. The hardware for the monitoring server does not need to be the beefiest of your iron in terms of RAM or CPU. This is a monitoring server, not a database server for OLTP or DW workloads. That being said, you will centralize data collection of performance stats across your instances here, so this server will need disk space.
  3. The out-of-the-box (with SQL Server EE) features that we’ll use to make this work are Utility Control Point (UCP), Management Data Warehouse (MDW) and Central Management Server (CMS). CMS is available in ALL editions of SQL Server 2008. I’m not going to go any further in this post about multi-server management and central management of your servers; I’ll stay focused on monitoring SQL Server, and I’ll leave you with a promise to follow up with a similar entry on how to MANAGE SQL Server centrally with CMS, DAC & PBM. Let me just say this: centralizing your monitoring on a single instance like this helps set you up for centralizing management, too. So that becomes another added benefit of this technique and approach.
  4. Let’s start with MDW on your central monitoring instance of SQL Server (2008 R2, of course). In SSMS on the Management folder, right-click on “Data Collection” and select “Configure Management Data Warehouse”. You will run through the wizard here to set up the MDW collection database on your central instance.
  5. Next, configure MDW on each instance that you want to monitor. When you run through the wizard on the monitored instance, you want to set up the data collectors to collect stats on that instance and send the data to the centralized MDW database. Do not just default to setting up each & every instance of SQL Server to collect and send to MDW. The reasons for this include that not every instance in your environment will be Enterprise Edition, and you also need to understand that the # of instances, the stats you collect and the collection interval will affect the amount of data being stored and retained in the centralized MDW database. This can result in very large MDW databases. Be sure to automate the purge history job for the data warehouse.
  6. Now configure UCP on the central monitoring instance. To get to the configuration tool for UCP, choose View | Utility Explorer in SSMS. You will see the set-up option now in SSMS for UCP. Once you’ve walked through the wizard to set up the local centralized UCP instance, you can now enroll instances into the UCP. This is done from the central server in the Utility Explorer. Select the Managed Instance node, right-click and select Enroll Instance. Be aware that these instances must be SQL Server 2008 R2. In the NOTES section below, I’ll point you to some ways that you can use MDW for SQL Server instances other than 2008 R2.
  7. Lastly, Central Management Server (CMS) will allow you to create server groups in the central monitoring server’s SSMS, from which you can run queries against and access the remote instances from a single SSMS instance. This is important to centralized performance monitoring because, with CMS, you can run DMV queries across all of your monitored instances with one single query execution. This will allow you to get results back from a number of instances around missing indexes, long-running queries, buffer size, etc. by running 1 query.

NOTES:

Here are my examples of how to configure MDW to collect through a pull mechanism instead of the classic push data collector mechanism so that you can collect stats from SQL Server instances that are not SQL Server 2008 instances.

Here is where I talk a little bit about actually using these tools on your monitoring instance once you have it up & running.

CLOSED: Free SQL Server Performance Monitoring Seminar in Malvern MTC

**** We had to close registration for this 20-seat seminar as the response was so quick and we filled it up!

**** If you sent me an email and received a response back that you are registered, then we will see you here on June 22 @ 2 PM.

If not, don’t worry … I promise to work to schedule another SQL Server free seminar in the Microsoft Malvern office again very soon.

Thanks! Mark


Are you going to be in the Philly area on June 22 from 2-5 PM? We are hosting Sebastian Meine for a 3-hour complimentary seminar at our Microsoft Technology Center in Malvern, PA. This free training will focus on SQL Server 2008 R2 Performance Monitoring and will include snacks & beverages.

If you can join us, that would be great! Just send me an email here. You must register with me in order to attend. Below is the flier for the seminar. Many thanks! Mark

SQL Server Performance Monitoring Basics — Will my SQL Server keep up with the workload of tomorrow?

This class will introduce the concepts and tools to recognize performance trends to identify and address small problems before they become big ones.

To accomplish this we will:

- introduce the ACID properties, focusing on Isolation, including locks and latches

- explore the query lifecycle with query compilation and the three query execution phases

- investigate blocking, showing ways to monitor for blocking occurrences

- examine performance-related DMVs

The class closes out by introducing the Management Data Warehouse, a tool to use on all your servers to monitor the items covered in this class.

After attending the class, you will be able to:

- describe ACID and its components and explain its impact on concurrency

- differentiate between locks and latches

- explain the query lifecycle

- identify blocked and blocking sessions and their SQL statements

- use DMVs to get a picture of your server health

- monitor important SQL Server performance metrics over time using the Management Data Warehouse

- see important trends in your performance metrics

SQL Server Extended Events

Just a quick shout-out today to MSSQLTIPS for this article on an introduction to what is, IMO, the most under-rated cool new SQL Server 2008 feature: Extended Events. Extended Events is a SQL Server sub-system that is always on and registers information about your system that is available to you as a DBA. It is a lightweight (in terms of system impact) alternative to SQL tracing and provides information that you can interrogate about what is good and what is bad on your SQL Server. You can even integrate the events with the Windows Server Event Tracing (ETW) subsystem.

SQL Server DBAs have been using server-side tracing, the default trace and client-side tracing with SQL Profiler for many years, so switching to a new mechanism for SQL Server monitoring and troubleshooting can feel unnatural. But make no mistake, Extended Events is THE way to monitor SQL Server events going forward. Denali will see a big increase and improvement in the use of X-Events.

Perhaps if SQL Server 2008/R2 had shipped with a nice GUI tool or SSMS integration, adoption of Extended Events would be greater. But you can download and install the free CodePlex add-in for Extended Events, which you can get to from the link at the top of the MSSQLTIPS article. The free X-Events explorer download very much simplifies setting up and querying events. Even so, there is still a learning curve, because with Extended Events you have to learn about event sessions and targets and how to configure them.
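To give a flavor of that learning curve, here is a minimal sketch of an event session; the session name and duration threshold are hypothetical choices of mine. It captures statements that run longer than one second into an in-memory ring buffer target:

```sql
-- Capture statements running longer than 1 second (duration is in microseconds)
CREATE EVENT SESSION LongQueries ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    WHERE duration > 1000000)
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION LongQueries ON SERVER STATE = START;
```

You can then pull the captured events back as XML by querying sys.dm_xe_session_targets for that session.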

In all honesty, I myself do not use Extended Events very much yet. But then, I’m no longer a full-time DBA or developer; now I just try to help those who are in that role! I do think this can be a very helpful way to get to the bottom of the good & bad of your SQL Server installations.