SQL Server 2012 will have Distributed Replay

In the several weeks since the 2011 PASS Summit, what I have experienced as a Microsoft Data Platform TSP is that most of the discussion, testing, excitement, and energy around the new SQL Server version (now SQL Server 2012, previously SQL Server code-named "Denali") has centered on the big-ticket items:

  1. AlwaysOn
  2. Power View (formerly Project Crescent)
  3. Columnstore Indexes
  4. SQL Appliances
  5. SQL Server Developer Tools (Visual Studio Integration formerly Project Juneau)
  6. Pricing Changes

I’ve written in this blog (you can search for these from the WordPress search bar at the top) about other “2nd-tier” updates to the SQL Server platform that will ship with the GA of SQL Server 2012 in early 2012, such as sequence objects.
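As a quick taste of sequence objects, here is a minimal T-SQL sketch (the object names are my own illustrative choices):

```sql
-- Create a sequence that hands out integers starting at 1
CREATE SEQUENCE dbo.OrderNumber
    AS INT
    START WITH 1
    INCREMENT BY 1;

-- Ask the sequence for the next value, independent of any table
SELECT NEXT VALUE FOR dbo.OrderNumber AS NextOrderNumber;

-- Or use it to feed a column default
CREATE TABLE dbo.Orders
(
    OrderID   INT  NOT NULL
        DEFAULT (NEXT VALUE FOR dbo.OrderNumber),
    OrderDate DATE NOT NULL
);
```

Unlike IDENTITY, the sequence lives apart from any one table, so several tables (or application code) can draw numbers from the same series.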

I want to introduce you to another nice new feature in SQL Server, one that I have heard customers ask about for a while now: Distributed Replay. This is the ability to capture workload from multiple SQL Servers from an application perspective and replay those workloads for development and testing purposes. It makes sense that this feature took some time to make its way natively into the product. In the past, I have seen some very clever and effective solutions that customers have wired together using downloaded 3rd-party tools, open source generators, and replayed SQL Profiler traces.

Those work well enough. But now you have a tool that can capture workload from more than one SQL Server at a time and play those transactions back against different load targets. This will be very helpful for regression testing, load testing, and staging, among other areas.
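For a rough sense of the workflow, here is a hedged sketch of the Distributed Replay administration tool's command line (the trace file, paths, server name, and client names are placeholders; check Books Online for the exact switches in your build):

```
:: Preprocess a captured trace into the controller's working directory
DReplay.exe preprocess -i "C:\Traces\ProdCapture.trc" -d "C:\DReplayWorking"

:: Replay the preprocessed workload against a test server,
:: synchronized across the named client machines
DReplay.exe replay -s TestSQLServer -d "C:\DReplayWorking" -w Client1,Client2 -o

:: Check on replay progress
DReplay.exe status
```

The two-phase shape (preprocess once, then replay as many times as you like against different targets) is what makes this useful for repeated regression and load testing.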

Jonathan Kehayias at SQLskills put up a blog on how to get it installed and configured (there is a server component and a client component), and I like James Serra’s write-up as an overview if you want to learn a little bit more.


Add SSDT to Your SQL Server Vernacular

SQL Server professionals know all the acronyms and abbreviations in SQL Server land: SSIS, SSRS, SSAS, SSMS, T-SQL, BIDS, and so on. In the next version of SQL Server, code-named “Denali”, there is a new term to be familiar with, particularly for SQL developers and SQL Server database schema developers: SSDT, or SQL Server Developer Tools. This is not a replacement for BI Development Studio (BIDS). But you should note that in Denali, all Visual Studio development (BI and database) now uses Visual Studio 2010 (VS2010), not Visual Studio 2008. You won’t need to license VS2010 separately unless you wish to use some of the more advanced Visual Studio and development team features. A new set of database development templates is made available with SSDT in Denali, and they are closely associated with the preferred database access techniques in .NET development circles: Entity Framework (EF) together with LINQ.

When you use Visual Studio to design your SQL Server database schema, you get many of the features that .NET developers have long enjoyed, such as refactoring and renaming, all with IntelliSense for your T-SQL code.

Now, when you code your application, you can build the schema directly with Entity Framework, and you also have the option of keeping the EF model synced with the database model should you modify the schema directly from Visual Studio instead.
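To make that concrete: the database project model in SSDT is declarative, so each object lives in the project as a plain CREATE script, and the tools diff that model against the target database to generate the change script at publish time. A hypothetical table script in such a project might look like this:

```sql
-- Customers.sql in the database project: the model IS the CREATE statement;
-- SSDT computes the ALTERs against the target database when you publish
CREATE TABLE dbo.Customers
(
    CustomerID   INT           NOT NULL PRIMARY KEY,
    CustomerName NVARCHAR(100) NOT NULL,
    CreatedDate  DATETIME2     NOT NULL
);
```

Renaming a column in this script is what drives the refactoring support mentioned above: the tooling propagates the rename to references rather than leaving you to hunt them down.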

This is all available to you now as part of CTP3 of Denali, which you can download and try out here. This tight integration of database projects with EF and Visual Studio 2010 is what was called “Project Juneau”. I suggest that you go here to learn much, much more about SSDT and Project Juneau. Best, Mark

 

SQL Server Denali: Which Features Work on Windows Server Core?

I cannot tell you how many times DBAs have asked me when SQL Server will support the Windows Server Core version of the OS. With Server Core, you interact with the OS much as you would with Linux or Unix, through a shell prompt, as opposed to the UI-centric model familiar to Windows users. This greatly minimizes the server attack surface and can also greatly reduce required patching. These are HUGE advantages for SQL Server DBAs.

SQL Server 2008 R2 does not support installing on Windows Server Core. However, the next version of SQL Server (Denali) will. I just want to point you to a very helpful posting on BOL that gives you guidance on which features of SQL Server will NOT function when installed on Server Core. The link is here. Note that most GUI-centric tools are not available. But the services outside the core SQL Server engine that most operations require, such as Integration Services and Analysis Services, ARE available.
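Since Server Core has no full GUI, installs there are typically driven from the command line in quiet mode. A hedged sketch (the feature list, instance name, and accounts are placeholders; see the BOL topic for the switches supported on Server Core):

```
Setup.exe /QS /ACTION=Install /FEATURES=SQLEngine,IS ^
    /INSTANCENAME=MSSQLSERVER ^
    /SQLSVCACCOUNT="DOMAIN\sqlservice" /SQLSVCPASSWORD="*****" ^
    /SQLSYSADMINACCOUNTS="DOMAIN\dbateam" ^
    /IACCEPTSQLSERVERLICENSETERMS
```

After setup, day-to-day management happens remotely via SSMS or locally via sqlcmd and PowerShell, which fits the reduced-surface-area point above.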

SQL Server Denali Deprecated Features List

I’ve fielded a few questions this week about the updated list of features that will be removed from SQL Server in the Denali release, as well as those being deprecated. So I thought I’d just put up a post with the MSDN link so that you can bookmark this and check back for updates as we move closer to the GA release: http://msdn.microsoft.com/en-us/library/ms143729(v=SQL.110).aspx. I would imagine that this list will evolve as the next CTPs are made available.
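As one concrete example of the kind of item on that list, `SET ROWCOUNT` as a way to limit DML statements has been on the deprecation path for a while, with the `TOP` clause as the replacement:

```sql
-- Deprecated pattern: limiting DML with SET ROWCOUNT
SET ROWCOUNT 100;
DELETE FROM dbo.StagingRows WHERE Processed = 1;
SET ROWCOUNT 0;

-- Preferred pattern: the TOP clause on the DML statement itself
DELETE TOP (100) FROM dbo.StagingRows WHERE Processed = 1;
```

It is worth scanning the list for patterns like this in your own code base well before an upgrade, since deprecated behavior can disappear in a later release.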

SQL Server Maximum Availability: Denali & Private Cloud?

As I work my way through the latest updates in the betas of SQL Server Denali, I’ve been thinking about different architectures to maximize database and application availability with the new SQL Server platform. It is important to note that one of the most hyped, popular, and intriguing architectures for databases and applications, both from my interactions with customers and from Redmond, is what we call “Private Cloud”.

I’ve written about a SQL Server private cloud before in this blog, and I often refer to it as an “optimized data center” for customers. The reason for this is that it can be confusing when software vendors talk about public clouds like Azure, Amazon, RightScale, etc. versus virtualized local environments in your data center. Private cloud means taking the concepts of the public cloud, such as elastic scale, metered billing, self-provisioning, consolidated servers, and maximum utilization of infrastructure to maximize ROI, and making those concepts viable in your own data center.

SQL Server Denali introduces an update to database mirroring that combines the Microsoft technologies of mirroring, clustering, and replication into a simple, easy-to-configure-and-maintain set of “availability groups”. With Denali, you can take the databases that are part of an application, put them into an availability group, and even have multiple replicas (think database mirroring with multiple read-only partners) that you can report from or back up from.

My thinking is that combining these two technologies will give the SQL Server DBA community two key capabilities that were either not available before in SQL Server or are much improved with this architecture:

  1. Combine applications into availability groups, which lets me move reporting and backups completely off the primary production database and onto the replicas.
  2. Create a SQL Server farm where I can use the Microsoft System Center suite for server load balancing and full server density, with Live Migration for zero-downtime patching, managing my instances as SQL Server virtual machines.
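To illustrate the first capability, here is a hedged T-SQL sketch of defining such an availability group (server names, endpoint URLs, and the database are placeholders, and the syntax may shift between CTPs and GA):

```sql
CREATE AVAILABILITY GROUP SalesAG
FOR DATABASE SalesDb
REPLICA ON
    'SQLNODE1' WITH (
        ENDPOINT_URL      = 'TCP://sqlnode1.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC),
    'SQLNODE2' WITH (
        ENDPOINT_URL      = 'TCP://sqlnode2.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC,
        -- allow read-only connections so reporting and backups
        -- can run against this replica instead of the primary
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));
```

The readable secondary is the piece that moves reporting and backup load off the production primary.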

The slide below is from a presentation that you can download from the Microsoft SQL Server private cloud link that I included below. This architecture works with SQL Server Denali since I would configure availability groups for my application databases using AlwaysOn database replicas, which require Windows failover clustering as shown below. The virtualization command-and-control software (Hyper-V and VMM in this case) will allow me to generate a SQL Server farm where I can move instances from server to server using Live Migration. So while HADR provides the tools for protection against failures and guest-level patching, Live Migration and VMM give me the ability to load balance, utilize full server resources, and move running instances from server to server with no downtime. The assumption in this architecture, BTW, is that I am running a single SQL Server instance per VM, so when I say “move an instance” in this grid architecture, I am moving a full guest OS VM.

Go here for more details on SQL Server Denali. And here is the Microsoft SQL Server private cloud home page.

Your thoughts and suggestions are very much welcome in the comments. Thanks! Mark

UPDATE: The SQLOS Team blog on SQLBlog added an excellent post that you should bookmark for SQL Server Private Cloud background info: http://sqlblog.com/blogs/sqlos_team/archive/2011/06/08/sql-server-virutlization-consolidation-and-private-cloud-resources.aspx

SQL 11 Analysis Services Cool New Features

The SQL Server Analysis Services team just added a posting from the PASS Summit here. When you click over, you’ll see some of the cool new additions to SSAS announced at PASS this week that will be part of SQL 11 (Denali). Personally, I’m excited to try out the new semantic data model (a nice addition to the existing UDM), DAX from PowerPivot now available in BIDS, and the VertiPaq storage option added to SSAS. Enjoy! I’ll add more as I come across the news …