Introduction to SQL Server 2012 AlwaysOn Availability Groups – Q & A

Last week, I presented my session “Introduction to SQL Server 2012 AlwaysOn Availability Groups” to my largest audience ever at the PASS DBA Fundamentals Virtual Chapter. There were 191 total attendees, and I would like to take a moment to thank all of you for attending; it was truly AWESOME! I would also like to apologize for the audio issues that occurred throughout the session. This was primarily my fault: I had joined the webinar twice, once as a presenter by phone with a high-quality headset and a good audio connection, and again as an attendee just to keep an eye on what all of you were seeing. Unfortunately, the attendee laptop was somehow selected as the microphone while I presented from my actual presenter laptop, and that is why the audio kept fading in and out and was of poor quality. Mark, Mike, and I met to discuss how to prevent this in the future, so it should not happen again.

Anyway, I received several questions during this session that I wanted to address via this blog post, as I think they could benefit everyone. So without further delay, here they are:

  • What would you recommend as a practical maximum number of databases per Availability Group?
    • This will depend on the hardware you’re running (specifically, the number of CPU threads the primary replica can support), the network bandwidth available between your primary and secondary replicas, and the number of transactions per second occurring in each database. There are no hard-and-fast rules about how many databases can be in an Availability Group; see Microsoft’s documentation for their recommendations in this regard.
  • How do Availability Groups work with CDC?
  • If an Availability Group is setup at the SQL Instance level, can you have multiple SQL instances per cluster node and have an Active-Active configuration?
    • First of all, an Availability Group is not the same as a Failover Cluster Instance. An Availability Group is a group of 1 or more databases that all fail over together and share a set of replicas onto which that failover may occur. Each replica is another SQL Server instance that sits on another node of the same Windows Server Failover Cluster as the primary replica. With that said, an Availability Group can only have replicas that are nodes of the same Windows Server Failover Cluster. Therefore, active/active in an Availability Group is more a question of which replicas are readable, not of running multiple Availability Groups. Additionally, an Availability Group is not an instance-level failover (like a Failover Cluster Instance), so things like the master and MSDB databases, logins, etc. do not fail over in an Availability Group. You can have multiple Availability Groups running at the same time, but keep in mind that they would all need to be sitting on nodes of the same Windows Server Failover Cluster, and only one instance of SQL Server per cluster node can participate in Availability Groups due to the coordination between the Availability Groups and their underlying Windows Server Failover Cluster. To clarify that a bit: you cannot install 2 SQL Server instances on the same Windows Server Failover Cluster node and have one instance host a replica for one Availability Group and the other instance host a replica for a different Availability Group. Instead, you would have a single SQL Server instance on the Windows Server Failover Cluster node that would participate in both of the Availability Groups.
  • Is it possible to set this up with demo licenses? Is there a temp/demo/developer clustering license available from Microsoft? (for those of us on the bench who would like to test this)
    • Absolutely! Microsoft offers an evaluation version of SQL Server 2012, which can be downloaded from Microsoft’s website and used to test AlwaysOn Availability Groups. In addition, if you already have SQL Server Developer Edition licenses, you can use those to test AlwaysOn Availability Groups (you just can’t use them in any production capacity).
  • Can you select which databases are in the Availability Group? Can you have two different databases from two different servers?
    • Yes, you can select which databases are part of the Availability Group. However, any database that is part of the Availability Group will need to be present on the primary replica and then synchronized to all secondary replicas. Therefore, if your primary replica has 10 databases, you could select 5 of those databases to be part of the Availability Group and those would then be synchronized to the other replicas. The 5 databases not included in the Availability Group would remain untouched and only on the server that they were on originally. The same is true of the secondary replicas. They will already contain all of the databases that are part of the Availability Group, but they can also contain a number of local databases that are not part of the Availability Group.
  • Can we use SQL Server Standard edition for only two nodes?
    • No, you cannot. What the High Availability matrix is showing is that Standard edition supports a two-node Failover Cluster Instance. Only two editions of SQL Server support Availability Groups, Enterprise and Developer, and both editions support up to 5 replicas in an Availability Group. Remember that AlwaysOn is just a marketing term that Microsoft uses to describe several of its High Availability features, and is not a feature in itself. Don’t let their overuse of this term confuse you.
  • Should the listener be a separate server? Does the listener need to have SQL Server installed on it?
    • The listener name is just a cluster resource name; it is not a separate physical server or cluster node, nor is it a separate SQL Server instance like a Database Mirroring witness would be. Think of the listener name as just another service that the Windows Server Failover Cluster can host on any of its nodes. The caveat here is that the Availability Group and the cluster talk to one another, and the Availability Group makes sure that the listener name is always hosted by the cluster node that holds the primary replica of the Availability Group. Therefore, it is safe to say that the primary replica (a stand-alone SQL Server instance installed on a Windows Server Failover Cluster node) is always the host of the listener name (if you created one). A syntax sketch covering this answer and the database-selection answer above follows this list.
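To make those last two answers concrete, here is a minimal, hedged sketch of creating an Availability Group that contains only selected databases and then adding a listener. The group and replica names come from the demo environment used later in this post (DemoAG, AlwaysOn1, AlwaysOn2); the domain suffix, IP address, subnet mask, and listener name are placeholders you would replace with your own:

-- Create the AG on the primary, picking only the databases you want in it:
CREATE AVAILABILITY GROUP [DemoAG]
FOR DATABASE [AdventureWorks2012], [AdventureWorksDW2012]
REPLICA ON
N'AlwaysOn1' WITH (ENDPOINT_URL = N'TCP://AlwaysOn1.demo.local:5022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, FAILOVER_MODE = AUTOMATIC),
N'AlwaysOn2' WITH (ENDPOINT_URL = N'TCP://AlwaysOn2.demo.local:5022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, FAILOVER_MODE = AUTOMATIC);

-- The listener is just a cluster resource that follows the primary replica:
ALTER AVAILABILITY GROUP [DemoAG]
ADD LISTENER N'DemoListener' (WITH IP ((N'10.0.0.50', N'255.255.255.0')), PORT = 1433);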

Creating a Login with a SID

For those of you who don’t already know, there is a new PASS Virtual Chapter called DBA Fundamentals that has meetings on the first Tuesday of every month. The chapter specializes in providing training and support for new DBAs and also for seasoned DBAs who would just like to brush up on their skills. I was fortunate enough to be asked to present at their second meeting on 9/4, and I gave my session about SQL Server Service Broker. One of the questions that I was asked during the session was: How do you create a login with a SID, and more specifically, how do you create and find the SID for that login?

In my session on Service Broker and also in my session on AlwaysOn Availability Groups, I mention that if you’re using AlwaysOn Availability Groups or Database Mirroring that you should create your logins with the same SID on each instance/replica so that when a failover occurs, you don’t have to re-map your database users to their appropriate logins. However, I don’t really go into the detail of how to do that in my sessions, so I wanted to take some time to do that here.

use master;

-- Get a new GUID value to use as the SID
declare @SID uniqueidentifier = newid();

-- In order to use the SID in a create login statement, it must be in binary format
declare @SID_Binary varbinary(max) = (select cast(@SID as varbinary(max)));

-- View the SID in GUID and Binary format:
select @SID, @SID_Binary;
-- E72669E3-9FAA-4BCB-8F8F-570EBF114674, 0xE36926E7AA9FCB4B8F8F570EBF114674

-- Here is the statement we really want to run:
--create login SQLDiablo with password='Passw0rd!', sid=0xE36926E7AA9FCB4B8F8F570EBF114674;

-- But that requires us to paste in the SID. There has to be a better way:
declare @UserName nvarchar(128) = 'SQLDiablo', @Password nvarchar(max) = 'Passw0rd!';
declare @Query nvarchar(max) = 'create login ' + @UserName + ' with password=''' + @Password + ''', sid=0x' + cast('' as xml).value('xs:hexBinary(sql:variable("@SID_Binary") )', 'varchar(max)') + ';';

select @Query;
execute sp_executesql @Query;

-- Since varbinary can be a little tricky to work with in dynamic SQL, XPath is our friend.
-- Above we converted the value of @SID_Binary to Hex using XPath's value method (don't forget to add 0x to the beginning of it).

-- Get the SID for the login we just created, as a GUID
select sp.name, cast(sp.sid as uniqueidentifier) as SID_AS_GUID from sys.server_principals sp where sp.name = 'SQLDiablo';

-- SQLDiablo, E72669E3-9FAA-4BCB-8F8F-570EBF114674

set @Query = 'drop login ' + @UserName + ';';
execute sp_executesql @Query;
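To apply this across replicas (as mentioned above for Availability Groups and Database Mirroring), the idea is to capture the SID from the instance where the login already exists and reuse it when creating the login elsewhere. A minimal sketch, reusing the demo values from above:

-- On the instance where the login already exists: capture its SID as hex.
select sp.name, sp.sid from sys.server_principals sp where sp.name = 'SQLDiablo';

-- On each other replica: create the login with that same SID, so database
-- users in the synchronized databases map to it without becoming orphaned.
create login SQLDiablo with password = 'Passw0rd!', sid = 0xE36926E7AA9FCB4B8F8F570EBF114674;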

AlwaysOn Availability Groups, Isolation Levels, & Selects Blocking the Redo Thread

This past weekend, I had the pleasure of presenting my session SQL Server 2012 AlwaysOn Readable Secondaries at SQL Saturday #126 in Indianapolis, IN. In that session, I covered the basics of AlwaysOn Availability Groups, how to set them up, failing over, and using readable secondary replicas for read-only workloads and backups. During the session, I always make sure to mention the possibility of queries running on the secondary replica blocking the redo thread that is trying to write changes from the primary database to the secondary database.

After mentioning this caveat, I was asked a very good question: Can you use transaction isolation levels (such as snapshot isolation) with AlwaysOn Availability Groups, and would they help to avoid the issue of read-only queries on secondary replicas blocking the redo thread? In order to answer this question, I’m going to break it into two parts, and then we’ll work through a couple of demos to illustrate the answers.


Q: Can you use transaction isolation levels with AlwaysOn Availability Groups?
A: Yes and no. Yes, you can set whatever isolation level you would like when running queries on the secondary replicas, and SQL Server will not return an error. However, SQL Server will automatically override the isolation level (and ignore all lock hints) when querying a readable secondary, and instead force the queries to use snapshot isolation.

Per the “Benefits” topic of the Active Secondaries: Readable Secondary Replicas (AlwaysOn Availability Groups) BOL article:

Read-only workloads use row versioning to remove blocking contention on the secondary databases. All queries that run against the secondary databases are automatically mapped to snapshot isolation transaction level, even when other transaction isolation levels are explicitly set. Also, all locking hints are ignored. This eliminates reader/writer contention.
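You can sanity-check the “no error” half of that behavior easily enough: on a readable secondary, a statement like the following (a trivial sketch; the table comes from the demo database used later in this post) runs without error, even though the engine quietly uses row versioning regardless of the level you set:

-- Runs without error on a readable secondary, but snapshot isolation
-- is what actually gets used under the covers:
set transaction isolation level serializable;
select top (10) * from AdventureWorks2012.dbo.ErrorLog;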

Q: Does using snapshot isolation prevent read-only queries from blocking the redo thread on secondary replicas?
A: After reading the above quote from BOL, you would think that read-only queries won’t block the redo thread, and that is true for DML statements. However, it is not true for DDL statements. Sure, snapshot isolation prevents read-only queries on the secondary replicas from taking locks and preventing other DML statements against the database from executing, but it doesn’t prevent the read-only queries from taking schema stability locks and blocking DDL statements.

If you take a look later on in the same article I quoted above, you will come to the “Read-Only Workload Impact” section under the “Performance Considerations” topic:

Also, read-only workloads on the secondary replicas can block data definition language (DDL) changes that are applied through log records. Even though the read operations do not take shared locks because of row versioning, these operations take schema stability (Sch-S) locks, which can block redo operations that are applying DDL changes.


In order to have a reference architecture to use for this demo, I’m going to use the Availability Group that I set up as part of the presentation that I mentioned at the beginning of this article. If you would like to play along at home, you can download the scripts and slide deck for that session here. This Availability Group is called DemoAG, and it has three replicas: AlwaysOn1 (primary), AlwaysOn2 (secondary), and AlwaysOn3 (secondary). There are two databases participating in the Availability Group: AdventureWorks2012 and AdventureWorksDW2012. I’ll be using AdventureWorks2012 for this demo.

Testing isolation levels and lock hints:

Unfortunately, I haven’t come up with a good way to demonstrate that SQL Server is overriding the transaction isolation level on a readable secondary, because it does this internally (I’ll keep looking to see if there is a good way to demonstrate it). I can, however, demonstrate that SQL Server ignores lock hints on readable secondaries. I do this by beginning a transaction, executing a select query with the tablockx lock hint, and leaving the transaction open. I then open a new query window and query sys.dm_tran_locks to see if there is a table lock on the table. If I do this on the primary, you will see an open table lock on the table. If I do this on the secondary, there is no open table lock. If I then try to update data on the primary replica, I can see that the statement completes and that the data has been written to the secondary replica, even with the open transaction on the secondary that should be holding a table lock.

-- Execute this in a session of its own on the primary replica:
use AdventureWorks2012;

begin tran;

select * from ErrorLog with (tablockx);

-- With the above transaction open, launch a new query window on
-- the primary replica, and execute this query
-- (you should notice a table lock on the ErrorLog table):
use master;

select t.name, dtr.*
from sys.dm_tran_locks dtr
join sys.databases d
on d.name = 'AdventureWorks2012'
and d.database_id = dtr.resource_database_id
join AdventureWorks2012.sys.tables t
on t.object_id = dtr.resource_associated_entity_id;

-- Now try running this insert in the same query window as the sys.dm_tran_locks query
-- (you should get blocked until you commit or rollback the transaction in the other session):
insert AdventureWorks2012.dbo.ErrorLog(UserName, ErrorNumber, ErrorMessage)
values('DemoUser', 1, 'Demo Message');

-- Go ahead and close both query windows, and connect to one of the secondary replicas.
-- Now, try running through the scenario above on the secondary replica (minus the insert).
-- You should notice that there is no table lock being held, even though we explicitly requested one.

Testing read-only queries blocking DDL statements:

As far as demonstrating that read-only queries can block DDL statements, all we need to do is run a long-running query on the secondary replica and simultaneously try to alter the table that the query on the secondary is using. We should notice that the redo queue size on the secondary then increases and stays that way until we either kill or complete the read-only query. This indicates that the redo thread is not able to commit the DDL changes because the select statement is taking a schema stability lock and preventing the alteration to the table.

-- Run this query on the secondary replica
-- (it's horribly ugly and inefficient, but the idea is to hold a
-- schema stability lock on ErrorLog for as long as we can):

select NumbersTable.Number
from AdventureWorks2012.dbo.ErrorLog el
cross apply (
select row_number() over (order by InnerNumbersTable.name) as Number
from (
select s1.name
from AdventureWorks2012.sys.sysobjects s1
cross apply AdventureWorks2012.sys.sysobjects s2
cross apply AdventureWorks2012.sys.sysobjects s3
cross apply AdventureWorks2012.sys.sysobjects s4
cross apply AdventureWorks2012.sys.sysobjects s5
) as InnerNumbersTable
) NumbersTable
group by NumbersTable.Number
order by NumbersTable.Number desc;

-- While that beast is running on the secondary replica, run the
-- following on the primary replica:

alter table AdventureWorks2012.dbo.ErrorLog add DemoColumn varchar(10) null;

-- Now go back to the secondary replica, open a new query window, and run the
-- query below (you should notice that the redo queue is greater than 0,
-- which indicates that the redo thread is having trouble applying changes
-- to the database):
select d.name, dhdrs.redo_queue_size, dhdrs.redo_rate
from master.sys.dm_hadr_database_replica_states dhdrs
join master.sys.databases d
on d.database_id = dhdrs.database_id
and d.name = 'AdventureWorks2012';
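If you want to identify the blocker itself rather than just the growing redo queue, a query along these lines on the secondary should surface it. This is a hedged sketch: the redo worker shows up in sys.dm_exec_requests with a command of DB STARTUP, and blocking_session_id should point at the long-running select from earlier.

-- Find the redo worker (command = 'DB STARTUP') and whatever is blocking it:
select r.session_id, r.command, r.wait_type, r.wait_time, r.blocking_session_id
from sys.dm_exec_requests r
where r.command = N'DB STARTUP'
or r.blocking_session_id > 0;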

Maintenance Plan Index Rebuild Gotcha

If you’re anything like me, you probably have a tendency to read what you think is on the page instead of what is actually there. Here’s a little gotcha that I discovered today when setting up the Rebuild Index task in a maintenance plan:

[Screenshot: the Rebuild Index task’s “Change free space per page percentage to” option]


As you can see, the percentage they are asking for is the amount of free space to leave on the index pages (that is, 100 minus the fill factor), and NOT the fillfactor for the index pages. Had I read this screen more carefully yesterday, I would have saved myself and my colleagues much pain and suffering today (a database with 90% free space in the indexes doesn’t perform well AT ALL…lol). :-(
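In T-SQL terms, the maintenance plan option is the inverse of FILLFACTOR: asking for 10% free space per page is equivalent to something like the following (a sketch against a hypothetical table), while entering 90 in that box would give you a FILLFACTOR of 10:

-- 10% free space per page == FILLFACTOR of 90 (not 10!)
alter index all on dbo.ErrorLog rebuild with (fillfactor = 90);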

Service Broker & AlwaysOn Availability Groups: Odd Transmission Queue Behavior

I’ve been working on a project over the past several months that will utilize Service Broker and AlwaysOn Availability Groups to meet some of the HA and DR goals of the company I work for (more info here). Just recently, I was able to implement the full solution in my development lab and point an instance of our website at it. While we were working out some kinks in our database and website to get the two working well with my Service Broker Replication project, I began noticing some odd behavior in Service Broker when it’s used with AlwaysOn Availability Groups, and I wanted to blog about it in an attempt to see if anyone else has seen this issue and might have an idea how to address it.

Update: I’ve also posted this question on

The Setup:

I have a Hyper-V host running 6 Windows Server 2008 R2 VMs (BTDevSQLVM1-BTDevSQLVM6). The VMs are grouped into three 2-node WSFCs with Node and File Share Majority quorum. I’ve installed standalone SQL 2012 Developer Edition instances on each of the VMs, and created an Availability Group with a listener on each cluster (SBReplDistrib, SBRepl1, & SBRepl2).

For the purpose of this blog post, I’ll be focusing on the communication between SBRepl1 and SBReplDistrib. The illustration below shows the Service Broker objects for each side of the conversation:

The Service Broker endpoints and routes are set up per this MSDN article. The SBRepl_Receive route in MSDB is for the local server’s service (//SBReplDistrib/SBRepl on SBReplDistrib, and //SBRepl1/SBRepl on SBRepl1), and points to the local instance. The SBRepl_Send route on SBRepl1 maps service //SBReplDistrib/SBRepl to TCP://SBReplDistrib:4022, and the SBRepl_Send_SBRepl1 route on SBReplDistrib is a similar mapping for the service on SBRepl1.

The Expected Behavior:

My understanding of how Service Broker handles message sending and receiving is as follows (this is pretty simplified; there is a lot more detail about this process in Klaus Aschenbrenner’s (blog | twitter) book “Pro SQL Server 2008 Service Broker”):

  1. The initiator app creates a message (in this case, well formed XML)
  2. If there is an existing dialog conversation between the initiator service and the target service that is in the conversing status, the app can simply send the message on the existing conversation handle. Otherwise, the initiator app should begin a dialog conversation between the initiator service and the target service and send the message on that conversation handle.
  3. The message is placed in the sys.transmission_queue system table and Service Broker begins making attempts to deliver the message to the target service.
  4. Service Broker looks for an appropriate route and remote service binding and uses them to determine the address to connect to in order to deliver the message.
  5. Service Broker opens a connection to the target, authenticates, and delivers the message to the target service broker.
  6. The target Service Broker attempts to classify the message and determine what local service will handle the message (it uses route data in the msdb database for this).
  7. The target Service Broker delivers the message to the target service’s queue
  8. Once the message is successfully delivered to the target queue, the target Service Broker looks for route information back to the initiator and attempts to deliver an acknowledgement that the message was received.
  9. The initiator’s Service Broker receives the acknowledgement and uses routing information in MSDB to determine what local service the acknowledgement is for.
  10. Upon successful routing of the acknowledgement to the initiating service, the message is then removed from the sys.transmission_queue system table.
  11. If the initiator does not receive an acknowledgement that the message was received, it will periodically retry delivering the message to the target. If the target has already received the message, it will simply drop any additional delivery retries and send acknowledgements for them.

The Odd Behavior:

Step 11 is where I am seeing some very odd behavior with Service Broker and AlwaysOn. I see the message getting delivered to the target and processed successfully, and I also see the acknowledgement getting sent back to the initiator and received. However, the message remains in sys.transmission_queue as though no acknowledgement was received. To make things even stranger, Service Broker isn’t attempting to resend the message like I would expect it to if the acknowledgement hadn’t been received. Instead, the message simply remains in sys.transmission_queue, and as new messages are sent, they get delivered and acknowledged, and they too remain in sys.transmission_queue. It seems like Service Broker is getting the acknowledgements and therefore stops trying to deliver the messages, but doesn’t remove them from sys.transmission_queue for some reason. The transmission_status for these messages remains blank, which should indicate that Service Broker hasn’t attempted to deliver them yet.
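For anyone who wants to check for the same symptom, this is the sort of query I’ve been running on the initiator to watch the stuck messages (sys.transmission_queue is the catalog view described above; the column list below is just what I find most useful):

-- Messages that have been acknowledged should disappear from here; in my case
-- they linger with a blank transmission_status and no redelivery attempts.
select conversation_handle, to_service_name, enqueue_time, transmission_status
from sys.transmission_queue
order by enqueue_time;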

I checked the retention setting on the service queue, and it is set to off, but that should only impact the service queue and not the sys.transmission_queue. I have also traced both sides of the conversation using SQL Profiler, and I am able to see the message getting sent and the acknowledgement being sent back to the initiator and getting received (see XML trace data at the end of this post).

One odd thing did jump out at me in the traces though. I noticed that both sides seemed to be a bit confused about the TCP connections, because messages are sent from the IP address of the node itself while the service routes and the messages themselves point to the name/IP of the AG listener. This confusion appears to be causing each side to close the existing connection between the two services and create a new one in order to deliver a message or acknowledgement. I’m not sure if this is normal or not or if it has anything to do with why the acknowledgements aren’t being handled correctly, but it was the only thing I could see that could possibly explain the odd behavior.

The Plea for Help:

At this time, I don’t have a solution to this message retention issue other than to manually end the conversation with cleanup on both sides, and that’s not really something I want to do. If you have any ideas as to why this might be happening or what I can do about it, please leave me a comment and let me know. If there is any additional information that you would like me to provide about my setup or about the issue, please let me know in the comments as well. I will post a followup to this post if/when I find a solution to this issue.

The Trace Data:

I’ve removed the trace data from this blog post for brevity and security reasons. Please leave me a comment if you would like to review the trace files.

Service Broker Overview

No Junk Mail

No poison messages either, please.

Service Broker has been a core feature of all editions of SQL Server since SQL 2005. It allows for the asynchronous delivery and processing of messages sent within a database or between databases (either on the same instance or in separate instances). In some ways, you can think of Service Broker like the USPS (in fact, SQL Server even uses Service Broker as part of Database Mail to send email messages). This blog post will use the analogy of sending a letter to a friend (yes, a snail-mail letter…yes, I know that nobody does that anymore… :-P) to give a brief overview of the various components of Service Broker and how they work together, allowing you to send a message from one instance to another instance, where the message will be processed.

Encryption & Security

By default, Service Broker will encrypt messages sent between databases (on the same instance, or on separate instances), and the best way to allow this is to use certificates. However, setting up certificate-based encryption can be a major pain in the butt, and the order of operations is extremely important:

  1. Create and back up the database Master Key on each database/instance.
  2. Create a user in each database/instance, with or without a login (e.g., UserA on InstanceA and UserB on InstanceB).
  3. Create a certificate for each of those users and back it up.
  4. On each database/instance, create a user with the same name as the user on the other instance/database (e.g., UserB on InstanceA and UserA on InstanceB).
  5. Restore the certificate for each of the users you just made (e.g., restore UserACert on InstanceB and UserBCert on InstanceA).

Once you have the users and certificates set up correctly, you can proceed with setting up the Service Broker infrastructure.

It is recommended to use database users without logins to restrict the users’ access to just the database that is receiving messages. However, if you use a stored procedure to automatically process messages received on a queue, and that stored procedure needs to interact with any database(s) other than the one that holds the queue, then you will need to create a user with a login and map that login to the other database(s) the stored procedure needs to interact with.

It helps to have a central location where you can store the certificate files so that you can get to them later if you need to add more instances to your Service Broker topology.

-- Example script to setup security on AdamServer
USE Demo;

-- Create the database master key (you should back it up too, but that's not required for this to work)
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'MyReally,Really,Really,ReallyStrongPassword...Really!';

-- You can optionally create a login to use. If you don't need the login, just create the user "WITH NO LOGIN" (instead of "FOR LOGIN AdamUser").
CREATE LOGIN AdamUser WITH PASSWORD = 'SomeOtherReally,Really,ReallyStrongPassword...Really!';
CREATE USER AdamUser FOR LOGIN AdamUser;

EXEC sp_addrolemember @rolename='db_owner', @membername='AdamUser'; -- Add security as you see fit. You probably shouldn't give away the keys to the house like I did here.

-- Create a certificate and bind it to the database user, then back it up (you must do the backup here because you will use it on the remote server to recreate this user and certificate there).
CREATE CERTIFICATE AdamUser_cert
AUTHORIZATION AdamUser
WITH SUBJECT = 'Certificate for database user AdamUser',
EXPIRY_DATE = '12/31/2099';

BACKUP CERTIFICATE AdamUser_cert
TO FILE = 'c:\AdamUser_cert.cer' -- It's probably a good idea to use a secure network share here
WITH PRIVATE KEY (FILE = 'c:\AdamUser_cert.pvk', -- It's probably a good idea to use a secure network share here
ENCRYPTION BY PASSWORD = 'SomeOther,OtherReally,Really,ReallyStrongPassword...Really!');

-- Recreate the remote server's user and certificate on this server (I'm assuming you've already done the steps above on the remote server. The user doesn't need a login.)
CREATE USER FriendUser WITHOUT LOGIN;

EXEC sp_addrolemember @rolename='db_owner', @membername='FriendUser';

CREATE CERTIFICATE FriendUser_cert
AUTHORIZATION FriendUser
FROM FILE = 'c:\FriendUser_cert.cer' -- It's probably a good idea to use a secure network share here
WITH PRIVATE KEY (FILE = 'c:\FriendUser_Cert.pvk', -- It's probably a good idea to use a secure network share here
DECRYPTION BY PASSWORD = 'SomeOther,OtherReally,Really,ReallyStrongPassword...Really!');

Message Types & Contracts

In order to hold a conversation using Service Broker, we first have to specify one or more message types. A Message Type is simply a logical way to specify what kind of content a message will contain. In this analogy, our Message Type will be called FriendlyLetter, and it specifies that all messages sent with it will be validated as WELL_FORMED_XML. (All messages sent with this type will contain letters sent between friends; we could also create another Message Type called BusinessLetter for letters sent between businesses.) Once we have our Message Type(s) created, we can create a Contract. In this case, our Contract will be called LetterContract, and it specifies that the parties involved in the conversation are allowed to communicate using only the FriendlyLetter Message Type.

CREATE MESSAGE TYPE [FriendlyLetter]
AUTHORIZATION dbo
VALIDATION = WELL_FORMED_XML;

CREATE CONTRACT [LetterContract]
AUTHORIZATION dbo
(FriendlyLetter SENT BY ANY);

Queues & Services

In order to send and receive our messages, we need a Queue (or mailbox) to send and receive the messages with. When we create our LetterQueue, we can optionally specify an automated process (stored procedure) that will handle the messages received (in this case, dbo.FriendlyMessageHandler). If you specify an automated process to receive the messages, you should also specify how many concurrent copies of that process can be running (in this case, just 1). If you don’t specify an automated process to receive the messages, then you will need to receive them manually (see the Conversations & Messages section).

Just like in real life, our messages won’t get delivered unless there is a Service to deliver them for us (in real life, this would be the USPS). Unlike in real life, though, the Service is only for a single Queue, whereas the USPS delivers to all mailboxes in the US. In this case, we will call our Service the LetterService, specify that it is for our LetterQueue, and have it owned by the local user we created in the Encryption & Security section.

CREATE QUEUE LetterQueue
WITH STATUS = ON, -- Is the queue enabled to receive messages?
RETENTION = OFF, -- Does the queue keep messages that have been received off the queue, or delete them?
ACTIVATION ( -- The Activation clause is optional and only needs to be specified when automatic message processing is being used
PROCEDURE_NAME = [dbo].[FriendlyMessageHandler], -- The stored procedure that is run when a message is received
MAX_QUEUE_READERS = 1, -- How many copies of the proc above can run concurrently?
EXECUTE AS 'AdamUser', -- What user will the proc be executed as?
STATUS = ON -- Is automatic activation turned on or off for this queue?
);

CREATE SERVICE [//AdamServer/LetterService] -- I recommend naming your services //<server>/<service>, so that it's easier to tell what server and service you're conversing with
AUTHORIZATION [AdamUser] -- The user that owns this service
ON QUEUE LetterQueue -- What queue is this service for?
([LetterContract]); -- What contract governs conversations over this service?

GRANT SEND ON SERVICE::[//AdamServer/LetterService] to FriendUser; -- Allow the other server's user to send this server messages

Endpoints, Routes, & Remote Service Bindings

Now that we have our Queue and Service created, we need to create a Route and Endpoint so that the Service knows where and how to send messages. We only need one Endpoint per instance, and in this case, our Endpoint will be called SBEndpoint. In order for our Service to know where to send the messages to, we need to specify a Route. In this case, our Route will be called SendLetterRoute, will specify that it is for the receiving server’s LetterService, and will also specify where that service can be found. We also need to create a route in MSDB for receiving messages to our local service, which I’ve called ReceiveLetterRoute.

The last piece of infrastructure we need to create is the Remote Service Binding. This tells SQL Server which user to use when sending messages to the receiving service. Just like with Routes, we need to create one Remote Service Binding for each receiving service we want to send to.



-- Create the Service Broker endpoint (one per instance). The endpoint name and
-- port come from this post; the AUTHENTICATION/ENCRYPTION options below are
-- assumptions, so adjust them to fit your environment.
USE master;

CREATE ENDPOINT SBEndpoint
STATE = STARTED
AS TCP (LISTENER_PORT = 4022)
FOR SERVICE_BROKER (AUTHENTICATION = WINDOWS, ENCRYPTION = REQUIRED);

-- Create the receive route in msdb so incoming messages are routed to the local service:
USE msdb;

CREATE ROUTE ReceiveLetterRoute
WITH SERVICE_NAME = N'//AdamServer/LetterService',
ADDRESS = N'LOCAL';

USE Demo;

CREATE ROUTE SendLetterRoute
WITH SERVICE_NAME = N'//FriendServer/LetterService',
ADDRESS = N'TCP://FriendServer:4022';

CREATE REMOTE SERVICE BINDING SendLetterBinding -- (binding name assumed)
TO SERVICE N'//FriendServer/LetterService'
WITH USER = [FriendUser]; -- The local user that owns the remote service's certificate; messages to that service are secured with this user's certificate

Conversations & Messages

Once all of the infrastructure pieces are created on both the sending and receiving sides, we can begin to hold a conversation/Dialog. When we begin our Dialog, we specify which local service is sending the message and to which remote service it is sending the message. We must also specify the Contract that governs how the two sides may communicate on that Dialog. In this case, our Dialog will be sent from //AdamServer/LetterService to //FriendServer/LetterService on Contract LetterContract. Once the Dialog has been started, we can send messages (of any type allowed by the contract), and the messages will be received in the same order in which they are sent. Since it is possible to send more than one message over a single Dialog, it is recommended not to begin new Dialogs unless absolutely necessary (beginning a new Dialog creates overhead as the two servers authenticate and encrypt a new channel of communication, whereas using an existing Dialog does not incur that overhead).

When the receiving side gets a message, the message is placed in the Queue specified by the receiving Service. If there is an automated process specified to process messages on that queue, then that Stored Procedure will be executed and will process the message. In order to process a message (either automatically or manually), you must Receive the message off of the Queue. In most cases, you will receive one message at a time to make sure that the messages are received and processed in the correct order.

-- Execute on sender:
DECLARE @ConversationHandle uniqueidentifier;
DECLARE @XMLMessage XML = N'<Letter>Hi!</Letter>'; -- any well-formed XML will pass the FriendlyLetter validation

BEGIN DIALOG @ConversationHandle
FROM SERVICE [//AdamServer/LetterService]
TO SERVICE N'//FriendServer/LetterService'
ON CONTRACT [LetterContract];

SEND ON CONVERSATION @ConversationHandle
MESSAGE TYPE [FriendlyLetter]
(@XMLMessage);

-- Execute on receiver:
declare @ConversationHandle uniqueidentifier, @Service nvarchar(512), @Contract nvarchar(256), @MessageType nvarchar(256), @XMLMessage xml;

WAITFOR (
RECEIVE TOP (1)
@ConversationHandle = conversation_handle,
@Service = service_name,
@Contract = service_contract_name,
@MessageType = message_type_name,
@XMLMessage = CAST(message_body AS XML)
FROM LetterQueue
), TIMEOUT 5000; -- Wait up to 5 seconds for a message to arrive
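One thing the snippet above glosses over: when a dialog has served its purpose, one side should end it (and the other side should end its half when the EndDialog message arrives on its queue), or conversation endpoints will accumulate. A minimal sketch using the handle captured by the RECEIVE above:

-- End our half of the dialog; the other side ends its half when the
-- EndDialog message arrives on its queue.
END CONVERSATION @ConversationHandle;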

Bringing It All Together

Service Broker Diagram



Lab environments, hacking, kittens, and you!

Shelby yawning.

Please, think of the kittens!

I had lunch with my good friend Allen White (blog | twitter) on Thursday and he was picking my brain about a project I’m working on (I’ll be blogging about it in the weeks to come), and it made me think about how important it is to have a lab environment. It also made me think about a blog post that my boss Matt shared with me about Facebook’s culture of “The Hacker Way”.

At Facebook, they value proofs-of-concept and evidence that something will work over a person’s clout or lengthy talks about a subject. I can really respect that. I work at a small company where time and resources are limited. EXTREMELY limited. We make decisions quickly, and we need to move on them even quicker. If we don’t, we’ll become stagnant, and for a company to become stagnant is to be left in the dust.

Don’t get me wrong, I’ve had plenty of lengthy and in-depth discussions about various things I’ve worked on, but at some point you have to get your hands dirty and go from an idea in your head to an actual working solution. Besides, if you can actually show that something will work, it carries much more weight than if you only prove it in theory. For example, you could talk until you’re blue in the face trying to convince me that shrinking a database isn’t a horrible thing that kills a kitten* every time you do it, but until you can come to me with actual evidence of a time when you did it, bad things didn’t happen (like fragmentation or corruption), and the kittens are still alive and meowing, I’m not likely to believe you.

This is where having a lab is key. It allows me as a developer & DBA to go from having a couple nebulous ideas in my head to an actual working prototype, while messing up and making mistakes along the way. Mistakes are a part of learning and a part of life. I’ll never get everything I do right on the first try, and that’s OK. After all, I AM human (contrary to popular belief). In fact, it’s more than OK. The more mistakes I make and have to fix, the better I’ll learn whatever it is I’m working on. Having a lab environment allows me to do that, while keeping those rapid changes and mistakes away from my actual development, testing, and production environments. As a bonus, once I have done my proof-of-concept and received the go-ahead to build the real thing, I can take what I did in my lab, bring it over to my development environment, and continue on my way to making the final solution instead of starting over from scratch.

Having a lab also allows me to be creative, since I know it’s safe to try anything I want. ANYTHING! (insert evil laugh here) It doesn’t matter how crazy the idea sounds. The important thing is how crazy the idea actually is to implement and whether it works or not. Oftentimes something sounds crazy enough that it just might work, and having a lab environment lets me show that to be true. Or not. My ideas don’t always work out, but since I built them in a lab, it’s no big deal if they break or go horribly, horribly wrong. Hey, at least I found out in my lab, at scale, instead of in production with our business on the line.

Another great aspect of having a lab and working “The Hacker Way” is that I can learn about things that I’ve never used before or that I might never get to use in my regular work. This can be great for studying to get a certification or for learning about new product features. For example, the project I’m working on now involves SQL Server Service Broker and can sit on top of SQL Server 2012’s AlwaysOn Availability Groups. I’ve never used either of those features before. Heck, I never actually thought I would use them. But having a lab environment empowered me to not only consider them as a possible solution, but to learn about them and eventually prove that they would work for us. That wouldn’t have happened without my lab.

With SQL Server 2012 due to launch on 3/7/12, my lab has also been a powerful learning tool for me to gear up for my MCITP DBA upgrade exams and to get ready to eventually upgrade our servers at work. Sure, I probably could learn about SQL Server 2012 and study for my exams with books and simulators, but having a lab with several instances up and running is a much more powerful way to learn about it and gain practical, hands-on experience with it.

So, if there is one thing I’d like you to take away from this post, it’s that you need to get a lab. It’s critical! Not only to you and your knowledge, skills, and career, but to your company as well. I was able to convince my boss to get me a Dell R410 to use as a Hyper-V host for my lab, and I couldn’t be happier. We lease it, and it’s really not very expensive at all, especially when you look at all of the benefits we’re getting out of just a single piece of hardware.


*Disclaimer: No kittens were harmed in the making of this blog post. Please help control the pet population and have your pet spayed or neutered. Rescue animals can make wonderful pets, please support your local animal shelter and adopt your next pet.