The EVENTDATA() Function

    The EVENTDATA() function in SQL Server lets us extract valuable auditing information from the events that fire DDL Triggers and Event Notifications.  The Database Engine hands the capturing mechanism a well-structured packet of data describing each event.

What is the format of this data?
    The format of the provided data is XML.  When working with EVENTDATA(), one of the best tools at your disposal is the XML Schema Definition, which you can reference when looking for the elements you'd like to query and capture.  This XSD can be found at the following link: SQL Server EVENTDATA() XSD file.  The definition is dauntingly vast, but there's a neat little trick for navigating it: simply search for the keyword "EVENT_INSTANCE_EventType".  For instance, say you are creating a DDL Trigger for the CREATE_PROCEDURE event.  Search for the text "EVENT_INSTANCE_CREATE_PROCEDURE", and you will be brought to your desired event and its contained elements:

  <!-- Basic Envelope -->
  <xs:element name="EventType" type="SSWNAMEType"/>
  <xs:element name="PostTime" type="xs:string"/>
  <xs:element name="SPID" type="xs:int"/>
  <!-- Server Scoped DDL -->
  <xs:element name="ServerName" type="PathType"/>
  <xs:element name="LoginName" type="SSWNAMEType"/>
  <!-- DB Scoped DDL -->
  <xs:element name="UserName" type="SSWNAMEType"/>
  <!-- Main Body -->
  <xs:element name="DatabaseName" type="SSWNAMEType"/>
  <xs:element name="SchemaName" type="SSWNAMEType"/>
  <xs:element name="ObjectName" type="SSWNAMEType"/>
  <xs:element name="ObjectType" type="SSWNAMEType"/>
  <xs:element name="TSQLCommand" type="EventTag_TSQLCommand"/>
This is the bulk of the information you’ll need in order to start utilizing the EVENTDATA() function.
Using the EVENTDATA() function
    Say you want to create a DDL Trigger to gather event information for the CREATE PROCEDURE command.  The elements laid out above already tell us what we can capture, along with their names and types.  For the sake of an example, let's create a database and a basic audit table to illustrate this trigger.
use master;
create database EventDataDemo;
go
use EventDataDemo;
create table DdlAudit
(
       EventType nvarchar(128) null,
       DatabaseName nvarchar(128) null,
       SchemaName nvarchar(128) null,
       ObjectName nvarchar(128) null,
       LoginName nvarchar(128) null,
       UserName nvarchar(128) null,
       SqlText nvarchar(1024) null,
       AuditDateTime datetime null
);

Now that we have the audit table set up, we can create the DDL Trigger that will capture and handle the CREATE PROCEDURE event.  This is one of those times when SQL Server XML knowledge comes in handy, as we'll be relying heavily on the XML value() method and XQuery to extract our desired data.
create trigger DdlCreateProc
on all server
for create_procedure
as
begin
       declare @eventdata xml = eventdata();
       insert into EventDataDemo.dbo.DdlAudit
       select
              @eventdata.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(128)'),
              @eventdata.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'nvarchar(128)'),
              @eventdata.value('(/EVENT_INSTANCE/SchemaName)[1]', 'nvarchar(128)'),
              @eventdata.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)'),
              @eventdata.value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(128)'),
              @eventdata.value('(/EVENT_INSTANCE/UserName)[1]', 'nvarchar(128)'),
              @eventdata.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(1024)'),
              getdate();
end
go
To test out our audit, simply create a stored procedure and query the DdlAudit table.
use EventDataDemo;
go
create procedure dbo.MyTestProcedure
as
       select 1;
go
select *
from dbo.DdlAudit;
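Because the trigger was created at the server scope, it will keep firing for every CREATE PROCEDURE on the instance until it is removed. A minimal cleanup sketch for the demo objects:

```sql
-- the trigger is server-scoped, so it must be dropped at the server level
drop trigger DdlCreateProc on all server;
go
use master;
drop database EventDataDemo;
```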

    The above example shows a good use of the EVENTDATA() function and how you can extract valuable information from it under the given circumstances.  If there are any comments, questions, or issues, please feel free to leave a comment below.

Optimize Queries

(This is part of the Study Guide series, 70-457)
Microsoft’s Measured Skill description: This objective may include but is not limited to: understand statistics; read query plans; plan guides; DMVs; hints; statistics IO; dynamic vs. parameterized queries; describe the different join types (HASH, MERGE, LOOP) and describe the scenarios in which they would be used
What I see:
·         understand statistics
·         query hints
·         statistics IO
·         join types (HASH, MERGE, LOOP)
Understand Statistics
                Statistics are the way that SQL Server records and uses the data distribution of tables and indexes.  They allow the query optimizer to choose an appropriate plan based on row counts, histograms, and density information.  Fresh statistics are necessary for the optimizer to make the best possible decision; stale statistics can fool SQL Server into thinking it has found the best plan when it is in fact a sub-optimal one.  For a great read and more information on statistics, see this Idera post by Donabel Santos on Understanding SQL Server Statistics.
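To poke at statistics yourself, the sketch below inspects a histogram with DBCC SHOW_STATISTICS and then rebuilds the statistics from a full scan (the statistics name here is the clustered primary key on AdventureWorks2012's Department table, which is an assumption you may need to adjust for your database):

```sql
use AdventureWorks2012;

-- show the header, density vector, and histogram for the named statistics object
dbcc show_statistics ('HumanResources.Department', 'PK_Department_DepartmentID');

-- refresh the statistics from every row instead of a sample
update statistics HumanResources.Department with fullscan;
```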
Query Hints
                Query hints are a way to tell the optimizer what to do, overriding what it would otherwise have chosen.  A few popular query hints are KEEP PLAN, MAXDOP, OPTIMIZE FOR, and RECOMPILE.  For instance, MAXDOP will override the max degree of parallelism configured at the instance level.  RECOMPILE will cause SQL Server to discard the query execution plan after the query has completed, as opposed to persistently storing it for later use.  Please see BOL for a full list of query hints and corresponding explanations.  All of these query hints are probably fair game on the exam, so a cursory knowledge of what they do will benefit you.
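As a quick sketch of the syntax, query hints are appended in an OPTION clause at the end of the statement:

```sql
use AdventureWorks2012;

select *
from HumanResources.Department
option (maxdop 1, recompile);  -- run serially and discard the plan when done
```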
Statistics IO
                The set statement, SET STATISTICS IO, outputs statistics about the disk activity generated by executed T-SQL statements.  To see a working example of this, execute the below T-SQL and view the Messages window to see the disk/cache activity:
use AdventureWorks2012;
set statistics io on;
select *
from HumanResources.Department;
set statistics io off;
This gives us information such as scan count, logical reads (from the data cache/memory), physical reads (from disk), read-ahead reads (read from disk to cache for future page reads), and the LOB equivalents to the aforementioned statistics.  This is a great way to see if a query or a subset of queries is hitting the disk too often.
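The exact numbers depend on your data and what is already cached, but the Messages window output takes roughly this shape (values shown are illustrative, not guaranteed):

```text
Table 'Department'. Scan count 1, logical reads 2, physical reads 0,
read-ahead reads 0, lob logical reads 0, lob physical reads 0,
lob read-ahead reads 0.
```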
Join Types
                There are three particular join types the optimizer can choose to utilize:
Hash Join – this join takes the smaller of the two sets to join, builds a hash table from it, and fits that table into the memory grant.  It then probes with the other set, computing a hash value for each row and comparing it against the hash table.  To see this join in action, utilize the following query (notice that Department is a relatively small table that can easily fit into memory as the hash table):
use AdventureWorks2012;
select *
from HumanResources.Department d
inner join HumanResources.EmployeeDepartmentHistory edh
on d.DepartmentID = edh.DepartmentID;
The execution plan should show a Hash Match (inner join) operator.
Merge Join – this join reads each of its inputs only once, merging the two streams together; it requires both inputs to be sorted on the join keys, so it can show performance gains when the data is already sorted (for example, by a clustered index):
use AdventureWorks2012;
select *
from Person.Person p
inner join Person.BusinessEntity be
on p.BusinessEntityID = be.BusinessEntityID;
The execution plan should show a Merge Join operator.
Loop Join – this join does just as its name states: for every row of one data set (the outer input), the other data set (the inner input) is searched for matching rows:
use AdventureWorks2012;
select *
from Person.Person p
inner join Person.BusinessEntityAddress bea
on p.BusinessEntityID = bea.BusinessEntityID
where bea.AddressTypeID = 5;
The execution plan for the above query should show a Nested Loops operator.
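If you want to see a specific physical join type without hunting for a query that produces it naturally, you can force one with a join hint. This is a demonstration sketch only; overriding the optimizer's join choice in production code is rarely a good idea:

```sql
use AdventureWorks2012;

-- force each physical join type in turn and compare the resulting plans
select *
from Person.Person p
inner hash join Person.BusinessEntity be
       on p.BusinessEntityID = be.BusinessEntityID;

select *
from Person.Person p
inner merge join Person.BusinessEntity be
       on p.BusinessEntityID = be.BusinessEntityID;

select *
from Person.Person p
inner loop join Person.BusinessEntity be
       on p.BusinessEntityID = be.BusinessEntityID;
```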
If there are any comments, questions, issues, or suggestions, please feel free to leave a comment below.