Policy-Based Management with Central Management Servers

                As a database administrator, you may be in charge of multiple databases.  But oftentimes it's the multi-server, multi-instance burden that becomes the most "click" intensive.  What I mean by that is you have to connect to multiple servers and multiple instances, or connect your Object Explorer in SSMS to a bunch of different instances, and before you know it you have a spaghetti cluster of instances on your desktop.  You end up spending far too much of your time clicking the wrong one, or worse, executing T-SQL against an unintended instance.
                Sometimes the above scenario is necessary.  But much object, database, instance, and server administration can be carried out with Policy-Based Management (PBM) in combination with Central Management Servers (CMS).  A central location makes many of these tasks convenient, and this blog post will focus on PBM with CMS.
                Policy-Based Management in and of itself is a great addition in SQL Server 2008.  But when you have dozens (or even hundreds) of instances that require a standard set of policies and conditions, manually pushing them out and evaluating them would be absurd (and let's be honest… we should be automating at every possible juncture).
                The first step to utilizing PBM with CMS is registering a Central Management Server:  Through SSMS in the Registered Servers section, right-click on Central Management Servers and select Register Central Management Server….  Select a SQL Server 2008 or higher instance to act as the CMS:
                To logically group your servers, I recommend creating a Server Group:  Right-click on the CMS and select New Server Group….  Name the Server Group whatever you like.  I will be calling mine "2008R2" to house my 2008 R2 instances, and "2012" to contain my 2012 instances:
                Now I want to register a server in my 2008R2 Server Group:
                My Registered Servers pane should look like the below now.  Mind you, there are only a couple of instances here and this is a very over-simplified implementation of CMS:
                There are a couple of ways to approach PBM with CMS:  1) Create a Policy on the Central Management Server and use that to evaluate against managed servers, or 2) create a Policy and export/import to the managed servers.  I will be focusing on the former method.
                Go to the CMS and create a Policy.  Because this is not an in-depth look at the extreme flexibility and power of PBM, I will just use an MS-shipped out-of-the-box Best Practices Policy imported from an XML file.  In the Object Explorer of my CMS I will go into Management, Policy Management.  Right-click on Policies and select Import Policy….  Navigate to the desired Policy File (XML file format) and select it.  I chose the famous Database Auto Close Policy to work with:
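                For reference, the condition behind this particular policy boils down to checking that AUTO_CLOSE is off.  The quick T-SQL below is my own rough, manual equivalent of that check (illustrative only; PBM evaluates it through the Database facet rather than running this query):
-- rough manual equivalent of the Database Auto Close best-practice check
select
       name,
       is_auto_close_on
from sys.databases
where database_id > 4       -- skip the system databases
and is_auto_close_on = 1    -- any rows returned would fail the policy
go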
                Now that I have a Policy created on my CMS, I will not need to duplicate this effort by exporting/importing it into my registered servers.  Back at the Registered Servers SSMS pane, I want to be able to evaluate this Policy against all of the servers registered under my CMS named "CMS".  I simply right-click on the CMS and select Evaluate Policies….  Now I will need to select the source.  In this case, my CMS will house all of the Policies, so I'll choose this server.  
Select the Policy(ies) that you want to evaluate against the registered servers, and click Evaluate.  You should be staring at your evaluation results for the registered servers in the scope of where you selected to evaluate policies:
 
                And there it is!  We have just used the administrative power of Policy-Based Management, and the ease of Central Management Servers, to evaluate a Policy against a subset of registered servers.  If there are any comments, questions, or issues please feel free to leave a comment or email me at sqlsalt@gmail.com.

Data Corruption Part 2: Clustered Index Corruption

                In the last post (Part 1) I discussed how to approach a nonclustered index data corruption issue.  What I am going to be explaining here is what to do with data corruption of the clustered index.  I want to start out with a copy of my AdventureWorks2012 database:
restore database AdventureWorks2012_CorruptCI
from disk = 'C:\MyDir\adventureworks2012_base.bak'
with
       move 'adventureworks2012_data' to 'C:\MyDir\AW_Data_CorruptCI.mdf',
       move 'adventureworks2012_log' to 'C:\MyDir\AW_Log_CorruptCI.ldf',
       stats = 5
go
                Now I want to corrupt a clustered index.  Last week in Part 1 I went into a long explanation and demonstration of how to corrupt data using a hex editor.  I'm not going to rehash that, as the process is basically the same.  The only difference between last week's nonclustered index and this week's clustered index corruption is that I want to choose the clustered index of a table, and I have chosen the Person.Person table (you'll see why soon.  Hint:  constraints).
                So corrupt a clustered index, any clustered index.  Feel free to work along with me and corrupt the Person.Person clustered index.  Now set the database back online and run DBCC CHECKDB:
alter database AdventureWorks2012_CorruptCI
set online
go
dbcc checkdb('AdventureWorks2012_CorruptCI') with no_infomsgs
go
                Notice what you see.  Very much like last week’s result, we get a handful of errors.  Notice the summary of the DBCC command:
CHECKDB found 0 allocation errors and 4 consistency errors in table 'Person.Person' (object ID 1765581328).
CHECKDB found 0 allocation errors and 4 consistency errors in database 'AdventureWorks2012_CorruptCI'.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (AdventureWorks2012_CorruptCI).
                That last line of output is worth discussing.  SQL Server actually plants the thought in the DBA's head to use REPAIR_ALLOW_DATA_LOSS with DBCC CHECKDB as the "minimum".  Now, one may be thinking "well, this is the minimum, and if SQL Server says it is okay then I'll go right ahead.  It must be the right answer."  Not always.  And we'll see why now.  Let's say you did run DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS:
alter database AdventureWorks2012_CorruptCI
set single_user
with rollback immediate
go
dbcc checkdb('AdventureWorks2012_CorruptCI', repair_allow_data_loss)
go
                Great!  It finishes, and all the issues were fixed.  So you grab your car keys, call your significant other, and tell them you're on the way home.  Easy workday, no late dinner tonight!  Or so you thought.  Meanwhile, you're grinning from ear to ear because you've fixed it… but something was missed.  Something very important.  And then you receive the dreaded phone call.  Here's what was missed.  Take, for instance, my Person.Person table that I corrupted.  Even after the REPAIR_ALLOW_DATA_LOSS, normal queries like this are successful:
select *
from Person.Person
                So what could possibly be the problem?  Here's what:  REPAIR_ALLOW_DATA_LOSS does just that… it may delete data in order to fix the corruption.  But there is yet another looming issue here.  The answer is in this next DBCC statement:
use AdventureWorks2012_CorruptCI
go
dbcc checkconstraints
go
                On my end that returned a whopping 25 foreign key relationships that were flagged as culprits:
                What appears to have happened is that during my DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS, SQL Server deleted five rows of data.  And because there were foreign key constraints referencing Person.Person, that has now become a pretty big problem (not to mention the underlying fact that we have just deleted data).
                The solution?  There is a good chance that when the root data, in this case the clustered index, is corrupt you will need a non-corrupt backup that you can restore side-by-side with the corrupt database and copy data over from.  This is yet another reason why a great disaster recovery plan is necessary.  Things happen, data gets corrupted.  And there's not always a quick and easy command, statement, or query to get it back to normal.  Do the appropriate investigation as to what exactly is corrupt, and the best means to fix it.
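                Below is a minimal sketch of that side-by-side approach, assuming a known-good backup exists.  The restored database name, file paths, and the simplistic row comparison are hypothetical, and column lists, identity, and constraint handling are omitted for brevity:
-- restore a clean copy side-by-side (hypothetical name and paths)
restore database AdventureWorks2012_Clean
from disk = 'C:\MyDir\adventureworks2012_base.bak'
with
       move 'adventureworks2012_data' to 'C:\MyDir\AW_Data_Clean.mdf',
       move 'adventureworks2012_log' to 'C:\MyDir\AW_Log_Clean.ldf',
       stats = 5
go
-- copy back any rows the repair deleted (sketch only; verify columns and constraints)
insert into AdventureWorks2012_CorruptCI.Person.Person
select c.*
from AdventureWorks2012_Clean.Person.Person c
where not exists
(
       select 1
       from AdventureWorks2012_CorruptCI.Person.Person p
       where p.BusinessEntityID = c.BusinessEntityID
)
go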
                The code for this test can be found on my GitHub SQLSalt repository.  A direct link to this test code is here.  NOTE:  THIS CODE AND TEST IS MEANT FOR ISOLATED TEST ENVIRONMENTS ONLY.  DO NOT RUN ANYWHERE NEAR PRODUCTION.  If there are any comments, questions, or issues please feel free to leave a comment or email me at sqlsalt@gmail.com.

Data Corruption Part 1: Nonclustered Index Corruption

                In this mini-blog-series I will be going over the different victims of data corruption:  Mainly nonclustered indexes (this post), clustered indexes, and system tables.  The first part of this blog post is how to corrupt a nonclustered index in a TEST environment in order to learn and practice the resolution to such a problem.  The base database I’ll be using is AdventureWorks2012 (it can be found here).  I then restored a copy of AdventureWorks2012 to corrupt, and called it AdventureWorks2012_CorruptNCI:
restore database AdventureWorks2012_CorruptNCI
from disk = 'C:\DatabaseFiles\AdventureWorks\AdventureWorks2012_base.bak'
with
       move 'AdventureWorks2012_Data'
              to 'c:\MyDataDir\AdventureWorks2012_CorruptNCI_Data.mdf',
       move 'AdventureWorks2012_Log'
              to 'c:\MyLogDir\AdventureWorks2012_CorruptNCI_log.ldf',
       stats = 5
go
Corrupting the Data
                First off, I want to emphasize that we are corrupting data!  In other words, THIS NEEDS TO BE DONE IN AN ISOLATED TEST ENVIRONMENT WITH TEST DATA AND DATABASES.  What we want to do now is corrupt a nonclustered index in our new AdventureWorks2012_CorruptNCI database.  I randomly chose the Person.EmailAddress table, so I wanted to get all of the indexes on that table:
select
       t.name as table_name,
       i.name as index_name,
       i.index_id,
       i.type_desc
from sys.indexes i
inner join sys.tables t
on i.object_id = t.object_id
where t.name = 'EmailAddress'
order by i.index_id
                After viewing the result set, I see there is a nonclustered index on this table called IX_EmailAddress_EmailAddress (note the index_id of 2):  That will be our victim.  Now we want to see all of the page information for this index.  In order to do this, we'll utilize the DBCC IND command to output a record per index page showing us certain page metadata:
dbcc ind('adventureworks2012_corruptnci', 'person.emailaddress', 2)
go
                This shows us a list of all pages in the nonclustered index, IX_EmailAddress_EmailAddress.  Choose a page, any page.  This page will be the one that we target for data corruption.  I have randomly selected a data page with an ID of 18250 (your PagePID may vary).  Now we want to just get a glimpse of the data contained on that page, and to do that we'll make use of the DBCC PAGE command:
dbcc traceon(3604)  -- required so that DBCC PAGE output is returned to the session
dbcc page('adventureworks2012_corruptnci', 1, 18250, 3)
go
                Now that we know we want to corrupt our specific page (PagePID 18250 in my case), I’ll do a quick calculation to get the page offset in the file.  We know that a page is 8 KB = 8192 B:
select 18250 * 8192
-- page offset: 149504000
                I get a resulting page offset of 149504000.  In order to actually corrupt the containing data file, we need to take the database offline to release the NTFS lock on the file:
use master
go
alter database AdventureWorks2012_CorruptNCI
set offline
with rollback immediate
go
                Now for the fun part:  Corrupting the data.  You can use a hex editor to alter the contents of the data file, and the one I typically use for direct file manipulation is the XVI32 Hex Editor (it's free and exactly what you need for this).  It can be found here.  The following instructions are specific to this application, so if you are not using the XVI32 Hex Editor, your steps may differ.
Open up the AdventureWorks2012_CorruptNCI data file that contains the nonclustered index.  Click on Address and then GoTo and input your page offset (ensure that the decimal format is selected).  In my case, I entered in 149504000.  Click OK and then XVI32 will take you right to the beginning of that particular page.  On the right you will see remnants of the actual data of the table (if you've opted to use the EmailAddress table as well, you'll see AdventureWorks email addresses).  On the left will be the hex representation of that data.  Click anywhere on that data page (I recommend using the right side to navigate to a character), and then zero out (00) the hex value for that byte.  That is enough!  Our data is now corrupted.  Save and close the data file.
Back in SSMS, set the database online:
use master
go
alter database AdventureWorks2012_CorruptNCI
set online
go
                Now I want to just make sure that we definitely corrupted the index:
use AdventureWorks2012_CorruptNCI
go
select EmailAddress
from Person.EmailAddress
                This should give you an error stating that “SQL Server detected a logical consistency-based I/O error: incorrect checksum…….” so on and so forth.
Fixing a Corrupted Nonclustered Index
                So the above was just a really slick way to corrupt a nonclustered index in our test database.  We broke it, and now it’s time to fix it.  But in reality, we initially won’t know exactly what is corrupt.  So you run a routine DBCC CHECKDB, or a user has an error of sorts, and now it’s time to get to the bottom of it.  Running a DBCC CHECKDB will give you the information you’ll need as to what’s corrupt:
dbcc checkdb with no_infomsgs
                The biggest takeaway here is the partition ID of the corrupt table/index.  In my case, my DBCC CHECKDB tells me the issue is with partition number 72057594050445312.  So now I want to find out exactly what table and index that is for (if you did the above process of data corruption, you already know.  But just for explanation’s sake, view the query results):
select
       p.partition_id,
       o.name as table_name,
       i.name as index_name,
       i.type_desc
from sys.partitions p
inner join sys.objects o
on o.object_id = p.object_id
inner join sys.indexes i
on o.object_id = i.object_id
and p.index_id = i.index_id
where partition_id = 72057594050445312
                Bingo!  This tells me that the corrupt partition is indeed a nonclustered index on Person.EmailAddress, and the faulty index is IX_EmailAddress_EmailAddress.
                When a nonclustered index is corrupt, the first step is to try a rebuild (and, it goes without saying, a little investigation into the cause of the data corruption is warranted and necessary; otherwise, for things like a failing drive, it will happen again):
alter index IX_EmailAddress_EmailAddress
on Person.EmailAddress
rebuild
go
                If the rebuild isn’t possible, you may need to drop and recreate the index.  The easiest way to do this is through SSMS:  Right-Click on the index and select Script Index as DROP and CREATE to New Query Window.
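                For reference, the scripted DROP and CREATE for this index would look roughly like the below.  This is a sketch only (always let SSMS generate the actual script, since the real definition and index options may differ), and note the caution in the comment at the end of this post about indexes that enforce constraints:
-- sketch of a scripted DROP and CREATE for the corrupt nonclustered index
-- (illustrative only; generate the real script from SSMS)
drop index IX_EmailAddress_EmailAddress
on Person.EmailAddress
go
create nonclustered index IX_EmailAddress_EmailAddress
on Person.EmailAddress (EmailAddress)
go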
                And, just for good measure I’ll run my DBCC CHECKDB to ensure that all is well.  And in my case, it is:
-- run DBCC CHECKDB to ensure the database is back to normal
dbcc checkdb with no_infomsgs
Summary
                When we run into corrupt data, our first step is to find out what exactly it is that is corrupt.  Far too often we want to run DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS, but that isn't always the right answer.  What was explained above is the case when the victim of data corruption is a nonclustered index.  The first step is to try to rebuild the index if possible.  No harm, no foul.  In the next couple of blog posts I'll be going over different victims of data corruption and their fixes.  If there are any questions, comments, or issues please feel free to leave a comment or email me at sqlsalt@gmail.com.
Code
                As always, the corresponding code can be found on my SQLSalt GitHub repository:
Comment
                Thanks to Paul Randal for his insightful comment and correction on my post, regarding rebuilding the corrupt index instead of jumping straight to a drop and create (as well as scripting out the DROP and CREATE through SSMS).  Paul Randal is a leading expert in all things SQL Server.  Thanks, Paul!!  Much appreciated.  Quoted comment from Paul Randal for reference:
“Beware of dropping and recreating the index – if it’s enforcing a constraint then the constraint could be violated while the index is dropped, meaning it can’t be created again. Better to try rebuilding it first always. I’d also advise using SSMS to script-create the index before dropping it just in case the system tables change in a future release – you don’t have to rely on your own code.”

Filtered Indexes

                There are many performance and space-saving strategies that we can employ as Database Administrators and Developers.  One of the tried-and-true methods is through the utilization of filtered indexes.  Below is how you would create a filtered index:
create index IX_YourIndexName
on YourTable(YourKeyColumn)
where YourColumn < 10
go
                In the above snippet, you can see that creating a filtered index is very much like creating any other index:  The difference is the WHERE clause, which narrows down the scope of the indexed rows.  The two main advantages of filtered indexes are the space saved, as well as potentially less index maintenance.
                When would you use a filtered index?  Oftentimes we have tables that contain data that won't ever be queried (or not queried often enough to justify the storage consumption of indexing it).  For instance, if you typically only look for products that actually have a cost associated with them, you are narrowing down your query.  So while it may behoove us to have an additional index on this table, the rows that have no cost associated with them will most likely never be retrieved (or not often enough to warrant inclusion in the index).
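                To make that scenario concrete, such a filtered index might look something like the below.  This is a hypothetical example against the Production.Product table; the column and threshold are purely illustrative and are not used in the test that follows:
-- hypothetical filtered index: only index products that actually have a cost
create index IX_Product_StandardCost_Filtered
on Production.Product(StandardCost)
where StandardCost > 0
go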
                Here’s an example of real-time space-saving proof.  (The below example uses the AdventureWorks2012 sample database which can be found here)
                I wanted to choose a relatively large table in the AdventureWorks2012 database to emphasize the filtered index advantage, so I chose the Sales.SalesOrderDetail table.  Run the below query to view a summary of the indexes on this table:
use AdventureWorks2012
go
select
       object_name(p.object_id) as table_name,
       i.name as index_name,
       i.type_desc as index_type,
       ps.row_count,
       ps.reserved_page_count,
       ps.reserved_page_count * 8 as reserved_space_kb
from sys.partitions p
inner join sys.indexes i
on p.object_id = i.object_id
and p.index_id = i.index_id
inner join sys.objects o
on p.object_id = o.object_id
inner join sys.dm_db_partition_stats ps
on p.partition_id = ps.partition_id
where o.is_ms_shipped = 0
and object_name(p.object_id) = 'salesorderdetail'
order by ps.row_count desc
                Now focus in on the nonclustered index called IX_SalesOrderDetail_ProductID:  It consumes a total of 2624 KB of space.  In order to get what the indexed columns are for this index, we can run the following simple query:
use AdventureWorks2012
go
select
       object_name(i.object_id) as table_name,
       i.name as index_name,
       i.type_desc,
       c.name as column_name,
       ic.is_included_column
from sys.indexes i
inner join sys.index_columns ic
on i.object_id = ic.object_id
and i.index_id = ic.index_id
inner join sys.objects o
on i.object_id = o.object_id
inner join sys.columns c
on c.object_id = i.object_id
and ic.column_id = c.column_id
where o.is_ms_shipped = 0
and object_name(i.object_id) = 'salesorderdetail'
order by table_name, i.type_desc, index_name
So now we know that the indexed column is ProductID (as the index name would have suggested).  Let’s go ahead and drop that index (for testing purposes only).
-- drop the out-of-the-box NCI on ProductID
drop index IX_SalesOrderDetail_ProductID
on Sales.SalesOrderDetail
go
                Why?  Well, say that for whatever business domain reason we typically never query records in this table that have a ProductID of greater than 800.  We don't want the overhead of maintaining this extra index, so we are going to drop it with hopes of replacing it with a newer, smaller filtered index.  Now create the filtered index based off of our guidelines:
-- create the filtered index
create index IX_SalesOrderDetail_ProductID_Filtered
on Sales.SalesOrderDetail(ProductID)
where ProductID < 800
go
                There you have it.  Now we have a filtered index on the Sales.SalesOrderDetail table.  Go ahead and run the index consumption query again to find out our space savings:
use AdventureWorks2012
go
select
       object_name(p.object_id) as table_name,
       i.name as index_name,
       i.type_desc as index_type,
       ps.row_count,
       ps.reserved_page_count,
       ps.reserved_page_count * 8 as reserved_space_kb
from sys.partitions p
inner join sys.indexes i
on p.object_id = i.object_id
and p.index_id = i.index_id
inner join sys.objects o
on p.object_id = o.object_id
inner join sys.dm_db_partition_stats ps
on p.partition_id = ps.partition_id
where o.is_ms_shipped = 0
and object_name(p.object_id) = 'salesorderdetail'
order by ps.row_count desc
                We should now get a reserved_space_kb of roughly 968 KB.  Due to the use of a filtered index (intelligently created, of course), we were able to knock the size of that index down to about 37% of the original size.
                Filtered indexes can be a great way to minimize wasted storage and index maintenance overhead.  With the knowledge of this tool and the appropriate application, it can yield impressive results.
                The applicable code for this blog post can be found on my GitHub SQLSalt repository.  The following links to the source code are:
                If there are any comments, questions, or issues please feel free to leave a comment or email me at sqlsalt@gmail.com.

Dedicated Administrator Connection

                There will be times when a standard connection to SQL Server isn’t possible and as a DBA you’ll need to connect to troubleshoot issues with the instance.  You can achieve this through the Dedicated Administrator Connection (known as the DAC).  The DAC has its own set of resources so that even upon memory contention, you will be able to obtain a connection (provided you are utilizing a login in the sysadmin fixed server role).
Enabling Remote Admin Connections
                By default, you can only connect to the instance’s DAC on the server locally (i.e. Network connections aren’t allowed).  This ensures the surface area of the instance is as small as possible out of the box.  But to enable remote connections to the DAC, you simply need to set the instance configuration appropriately:
-- set 'remote admin connections' enabled
exec sp_configure 'remote admin connections', 1
go
reconfigure
go
DAC Port
                When SQL Server starts up, it will establish a listening port for the DAC.  For a default instance, SQL Server will attempt to use TCP 1434.  But if it can't allocate this port, or if it is a named instance, then SQL Server will dynamically reserve a port.  In either case, SQL Server logs the established port in the Error Log at startup.  In order to find out what port this is, you can do the following:
exec sp_readerrorlog 0, 1, 'dedicated admin connection'
go
The Text field will show something like this:
Dedicated admin connection support was established for listening remotely on port 1434.
Connecting to the DAC
                There are two ways to connect to the DAC: through SQLCMD and SSMS.
SQLCMD
                To connect to the DAC through SQLCMD, you will need to utilize the -A switch.  A sample connection would look as follows:
      sqlcmd -S YourServer\YourInstance -U sa -P My$ecur3P@$$w0rd -A
SQL Server Management Studio (SSMS)
                To connect through SSMS, there's a bit of a "gotcha":  You can't connect Object Explorer to the DAC, because Object Explorer normally uses multiple connections.  To get around this, you will need to follow the below steps:
1.       Click on the “New Query” button to open a New Query Window: 
2.       Right-click in the query window and mouse-over “Connection”.  If this window already has a connection, you can select “Change Connection…” or if not simply select “Connect…”: 
3.       Now you will be prompted with the connect dialog, and to connect to the DAC through SSMS you just need to prefix the ServerName\InstanceName with the string “admin:”, as follows: 
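                Once connected, keep your queries lightweight, since the DAC runs on a limited set of resources.  As an illustration (my own example query, not a prescribed checklist), a quick look at what is currently running could be as simple as:
-- lightweight troubleshooting query to run over the DAC
select
       r.session_id,
       r.status,
       r.command,
       r.wait_type,
       r.blocking_session_id
from sys.dm_exec_requests r
where r.session_id > 50
go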
As shown above, the DAC is a relatively easy portion of SQL Server to analyze and connect to, but equally an essential instrument in the troubleshooting DBA’s toolbox.  If there are any comments, questions, or issues please feel free to leave a comment or email me at sqlsalt@gmail.com.
Code
                As always, this code/information can be found on my SQLSalt GitHub repository:

Database Accessibility

                There are going to be many times throughout the administration of databases that a DBA will need to control how the database is accessed.  Sometimes we need to shut it down altogether, or leave it open just enough to troubleshoot.  There are a few variations that can give you the end result for database accessibility.
Database State
                The database state controls whether or not the database is accessible, and the level of accessibility.  There are three options for database state:
·         ONLINE – the database is accessible for user connections
·         OFFLINE – the database is shut down and can't receive user connections
·         EMERGENCY – the database is READ_ONLY, logging is disabled, and access is limited to members of the sysadmin role
Here are some examples in action:
Setting a database OFFLINE: (two variations for rolling back open transactions.  See code comments for descriptions of each)
use master
go
-- take database immediately offline
-- rollback all transactions right away
alter database YourDatabase
set offline
with rollback immediate
go
-- take database offline when all open trans are
--  committed or rolled back
-- rollback all transactions after 30 seconds
alter database YourDatabase
set offline
with rollback after 30
go
Setting a database ONLINE:
use master
go
alter database YourDatabase
set online
go
Setting a database in EMERGENCY:
alter database YourDatabase
set emergency
with rollback immediate
go
Database Updateability
                There are two options to control whether or not write operations can be executed on a database:

          ·         READ_ONLY – data can be read but not written
          ·         READ_WRITE – data can be read and written to (given appropriate privileges)
Put database in READ_ONLY:
alter database YourDatabase
set read_only
with rollback immediate
go
Put database in READ_WRITE:
alter database YourDatabase
set read_write
with rollback immediate
go
Database User Access
                There are no doubt times when you'll absolutely need to change how many users, and which roles, can connect to the database.  There are three options that control database user access:
          ·         MULTI_USER – all users are allowed to connect to the database (given appropriate privileges)
          ·         SINGLE_USER – only one user (with privileges to connect) can be connected at any given time
          ·         RESTRICTED_USER – only members of the following roles can connect to the database: db_owner (database role), sysadmin (server role), and dbcreator (server role)
Set database for MULTI_USER access:
alter database AdventureWorks2012
set multi_user
go
Set database for SINGLE_USER access:
alter database AdventureWorks2012
set single_user
with rollback immediate
go
Set database for RESTRICTED_USER access:
alter database AdventureWorks2012
set restricted_user
with rollback immediate
go
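                To verify where a database currently stands on all three of these settings (state, updateability, and user access), one option I like (my own quick check, nothing fancy) is a simple look at sys.databases:
-- check current state, updateability, and user access for all databases
select
       name,
       state_desc,
       user_access_desc,
       is_read_only
from sys.databases
order by name
go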
                In summary, this post has shown you numerous ways to control who can access the database, how many users can connect, and how they can access it.  Any comments, questions, or problems, you can leave a comment below or email me at sqlsalt@gmail.com.
                As for the code to this post, it is kept in my SQLSalt GitHub repository, as follows:

The Procedure Cache

                The procedure cache is the portion of memory where SQL Server stores execution plans, including query plans and execution contexts.  The life cycle of an executing statement starts with SQL Server checking the procedure cache to see if a corresponding execution plan already exists.  This caching technique allows the database engine to avoid recompiling the same statement over and over.  If SQL Server finds the execution plan, then it will reuse that plan without recompiling the statement at hand.  But if the database engine does not find a plan for the statement, it will generate a new plan and store it in the procedure cache.
                In order to get some further details on what is stored in the procedure cache, there are a handful of useful DMVs to use in the search.  For instance, if you wanted to retrieve all of the stored procedure execution plans for a particular database, you could do this:
-- substitute 'YourDatabaseName' with the desired db
use YourDatabaseName
go
select
      db_name(st.dbid) as database_name,
      object_name(st.objectid) as name,
      p.size_in_bytes / 1024 as size_in_kb,
      st.text
from sys.dm_exec_cached_plans p
cross apply sys.dm_exec_sql_text(p.plan_handle) st
where p.objtype = 'proc'
and st.dbid = db_id()
                Something very interesting (and something that further fortifies the beauty of stored procedures and SQL Server's ability to reuse their execution plans) is to notice the use counts on ad hoc plans:
select
      p.size_in_bytes / 1024 as size_in_kb,
      p.usecounts,
      s.text
from sys.dm_exec_cached_plans p
cross apply sys.dm_exec_sql_text(p.plan_handle) s
where p.objtype = 'adhoc'
                SQL Server increments the usecounts field every time it reuses that specific plan, and it is a good indicator of plan reuse.
Plan Recompilation
                There are many things that may cause plans to be recompiled.  There is a very detailed list in BOL that serves as a great reference (summarized below).  Plans will be recompiled on the following occurrences:
·         changes to a table or view referenced by the query
·         changes to an index used by the query plan (including a dropped index)
·         updates to statistics used by the query plan
·         an execution of sp_recompile
·         a large number of changes to keys
·         significant growth of the number of rows in the inserted and deleted tables for tables with triggers
·         utilizing the WITH RECOMPILE option when executing a stored procedure (see the sketch after this list)
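                As a quick illustration of that last item, here is what executing a procedure WITH RECOMPILE looks like (using the same AdventureWorks procedure from the example below; the parameter value is arbitrary):
-- force a fresh compile for this execution only; the new plan is discarded afterwards
exec uspGetEmployeeManagers
      @EmployeeID = 1
with recompile
go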
Working Example
            Here’s a working example using our trusty AdventureWorks database (with SQL Server 2008 R2).  Let’s first run the below query just to clear the proc cache and start from scratch (I know this goes without saying, but please do this testing on your test machine nowhere near production):
use AdventureWorks
go
-- start from scratch
dbcc freeproccache
select
      db_name(st.dbid) as database_name,
      object_name(st.objectid) as name,
      p.plan_handle,
      p.size_in_bytes / 1024 as size_in_kb,
      p.objtype,
      p.usecounts,
      st.text
from sys.dm_exec_cached_plans p
cross apply sys.dm_exec_sql_text(p.plan_handle) st
where st.dbid = db_id()
and p.objtype = 'proc'
The result set should be empty.  Now, just for the sake of getting a plan cached, run the following code:
exec uspGetEmployeeManagers
      @EmployeeID = 1
go
                Now by running the same query from above (utilizing sys.dm_exec_cached_plans) WITHOUT the DBCC FREEPROCCACHE call (we don’t want to clear the cache this time), we should see results that look like this (fields omitted for brevity):
                name:                   uspGetEmployeeManagers
                plan_handle:     0x0500090076146E6C40C1389E000000000000000000000000
                usecounts:          1
                Go ahead and run the stored procedure again:
exec uspGetEmployeeManagers
      @EmployeeID = 2
go
                Run the SELECT against sys.dm_exec_cached_plans again and you'll see results that look like this:
                name:                   uspGetEmployeeManagers
                plan_handle:     0x0500090076146E6C40C1389E000000000000000000000000
                usecounts:          2
                So the only thing that has changed is that the usecounts counter has been incremented by one.  The plan_handle is the same, and this proves that SQL Server reused this cached plan the second time we executed uspGetEmployeeManagers.
                Now what we want to do is cause a recompilation.  We will do this through the use of the system stored procedure sp_recompile:
exec sp_recompile 'uspGetEmployeeManagers'
go
                And now when we gather our plan data again, we'll see changed parameters like this, with a completely new plan_handle and a usecounts that starts back at one:
                name:                   uspGetEmployeeManagers
                plan_handle:     0x0500090076146E6C4061439E000000000000000000000000
                usecounts:          1
                In this blog post I have briefly explained what the procedure cache is, what is stored there, and when plans will be recompiled.  With the above example, there are endless ways to see this in action.  The useful source code for this blog post can be found here: BufferPool_PlanCache_StoredProcsForDB.sql.  If there are any comments, questions, or issues please feel free to leave a comment or email me at sqlsalt@gmail.com.