Implement Programming Objects: Write automation scripts

(This is part of the Study Guide series, 70-464)
Microsoft’s Measured Skill description: This objective may include but is not limited to: automate backup testing; shrink file; check index fragmentation; archive data; run an SQL Server Integration Services (SSIS) job; check disk space; automate backups
What I see:
·         automate backup testing
·         shrink file
·         check index fragmentation
·         archive data
·         run an SSIS job
·         check disk space
·         automate backups
Automate Backup Testing
                I wrote a script to do this.  Basically, it takes a backup, restores it as a stand-alone database, and then runs DBCC CHECKDB against the restored database.  It reports the results of the DBCC CHECKDB and then drops the database.  This script can be found on my GitHub SQLSalt repository.
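The core of the idea is only a few statements; here's a minimal sketch (the database name and file paths are hypothetical — the real script derives them from the backup):
restore database MeasureMySkills_Verify
from disk = N'C:\Backup\MeasureMySkills.bak'
with
       move 'MeasureMySkills' to N'C:\Data\MeasureMySkills_Verify.mdf',
       move 'MeasureMySkills_log' to N'C:\Data\MeasureMySkills_Verify.ldf';
go
-- run the integrity check against the restored copy
dbcc checkdb('MeasureMySkills_Verify') with no_infomsgs;
go
-- the copy has served its purpose
drop database MeasureMySkills_Verify;
go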
Shrink File
                I’m actually a bit surprised that Microsoft put this in the objectives.  It is not advisable to shrink database files routinely, as this operation can cause severe index fragmentation.  For my testing purposes, I altered the size of my MeasureMySkills database files to 10000 MB each to show the drastic differences in file shrinkage.  If you want to get database file space usage statistics for your current database, you could execute the following query:
use MeasureMySkills;
go
;with DatabaseFileInfo as
(
       select
              db_name(database_id) as database_name,
              name as db_file_name,
              size * 8 as total_size_kb,
              fileproperty(name, 'SpaceUsed') * 8 as space_used_kb
       from sys.master_files
       where database_id = db_id()
)
select
       database_name,
       db_file_name,
       total_size_kb,
       space_used_kb,
       total_size_kb - space_used_kb as free_space_kb,
       cast(space_used_kb * 1.0 / total_size_kb * 100.0 as decimal(5, 2))
              as percent_used
from DatabaseFileInfo;
This will let you know how much space is actually being used, and how much is allocated.  If for some reason you want to shrink a database file, you could do the following (using my MeasureMySkills database):
-- shrink my data file
dbcc shrinkfile('MeasureMySkills', 0);
go
-- shrink my log file
dbcc shrinkfile('MeasureMySkills_log', 0);
go
Then, re-running the database file space usage stats query from before, you should see regained space (for the data file, close to 100% usage; the log file will be bound by VLFs, so your results may vary).  You can automate this process by creating a Maintenance Plan and utilizing a Shrink Database Task.  You could also schedule a job to check a database’s file space usage and shrink the files if usage drops below a certain threshold (also possible cumulatively through DBCC SHRINKDATABASE).  Again, this is not recommended in production.  It is better to make an educated calculation of data scalability and create an appropriately sized database.  There’s a good saying that if space was once used by a database (i.e. it auto-grew, or more space was allocated for a reason) then it will probably be needed again.  Drive space is cheap; poor performance and unwieldy fragmentation can be expensive.
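If you do decide to automate a threshold-based shrink anyway, here's a minimal sketch (the 25% free-space threshold is arbitrary, and the file name is from my test database):
use MeasureMySkills;
go
declare @free_pct decimal(5, 2);

select @free_pct = (size - fileproperty(name, 'SpaceUsed')) * 100.0 / size
from sys.database_files
where name = 'MeasureMySkills';

-- only shrink when more than a quarter of the file is empty
if @free_pct > 25.0
       dbcc shrinkfile('MeasureMySkills', 0);
go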
Check Index Fragmentation
                I wrote a script that can be found on my GitHub SQLSalt repository that gets a summary of index fragmentation for a particular database, subject to a minimum page threshold.  It utilizes the sys.dm_db_index_physical_stats DMV with a join on the sys.indexes catalog view.  Running it against AdventureWorks2012:
use AdventureWorks2012;
go
declare
       @page_count_min bigint = 1000;
select
       db_name(ps.database_id) as database_name,
       object_name(ps.object_id) as object_name,
       i.name as index_name,
       ps.index_id,
       ps.index_type_desc,
       ps.avg_fragmentation_in_percent,
       ps.page_count
from sys.dm_db_index_physical_stats
(
       db_id(),
       default,
       default,
       default,
       'detailed'
) ps
inner join sys.indexes i
on ps.object_id = i.object_id
and ps.index_id = i.index_id
where ps.page_count >= @page_count_min
order by ps.avg_fragmentation_in_percent desc;
Archive Data
                Archiving data can be done a few ways.  As far as automation goes, it could be something as simple as a SQL Server Agent job run daily (or weekly, or monthly) to copy data from an active table to an archive table.  This can also be done through PowerShell.  It can be as simple as a DELETE with an OUTPUT DELETED.* INTO clause, as shown below.
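A minimal sketch (the ActiveOrders/ArchivedOrders tables and the 90-day retention window are hypothetical):
use MeasureMySkills;
go
-- move rows older than 90 days from the active table to the archive table
delete from dbo.ActiveOrders
output deleted.* into dbo.ArchivedOrders
where OrderDate < dateadd(day, -90, getdate());
go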
Run an SSIS Job
                Again, running a SQL Server Agent job with an SSIS job step can be done through a SQL Server Agent schedule.
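An existing job with an SSIS package step can also be kicked off on demand (the job name here is hypothetical):
exec msdb.dbo.sp_start_job @job_name = N'LoadWarehouseSSIS';
go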
Check Disk Space
                Checking disk space can be done by scheduling a job that makes a call to the master.sys.xp_fixeddrives extended stored procedure.  This will return a row per logical drive and the remaining space in MB.  If this is below a particular threshold, then use Database Mail to send an email, log it to the event log, or take whatever action you find appropriate.  Likewise, you can set up a WMI alert within SQL Server Agent with a WQL query to surface this information.
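A minimal sketch of the threshold check (the 10 GB threshold, mail profile, and recipient are hypothetical):
declare @drives table
(
       drive char(1),
       free_mb int
);

insert into @drives
exec master.sys.xp_fixeddrives;

-- alert if any logical drive has less than 10 GB free
if exists (select * from @drives where free_mb < 10240)
       exec msdb.dbo.sp_send_dbmail
              @profile_name = N'DBA Mail',
              @recipients = N'dba@example.com',
              @subject = N'Low disk space warning';
go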
Automate Backups
                This is one of the basic steps of automation.  It is typical for a DBA to either create a Maintenance Plan to do full, differential, and transaction log backups regularly on a schedule, or create individual SQL Server Agent jobs to accomplish the same tasks.  These are simple BACKUP DATABASE/LOG statements, and it is also worth noting that these backups should be tested for integrity.
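The statements behind those jobs are straightforward; a minimal sketch (paths are hypothetical), using WITH CHECKSUM so each backup is verified as it is written:
backup database MeasureMySkills
to disk = N'C:\Backup\MeasureMySkills_full.bak'
with checksum, init;
go
backup log MeasureMySkills
to disk = N'C:\Backup\MeasureMySkills_log.trn'
with checksum, init;
go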
References
Code
If there are any comments, questions, issues, or suggestions please feel free to leave a comment below or email me at sqlsalt@gmail.com.

Implement Database Objects: Work with XML data

(This is part of the Study Guide series, 70-464)
Microsoft’s Measured Skill description: This objective may include but is not limited to: implement XML; use XML (Query, Input, Output); retrieve relational data as XML; FOR XML; design a strategy to transform XML into relational data; design a strategy to query and modify XML data; understand XML data types and their schemas and interoperability, limitations, and restrictions; implement XML schemas and handling of XML data; when and when not to use XML, including XML namespaces; import and export XML
What I see:
·         query XML
·         Retrieve relational data as XML
·         XML data types
·         XML schemas
Query XML
                Querying XML can be achieved through the use of XQuery (support was introduced in SQL Server 2005).  The xml data type exposes a number of methods for this.  Below are examples of a few of them:
Example
use MeasureMySkills;
go
create table MyXmlTable
(
       id int identity(1, 1) not null,
       MyXml xml not null
);
go
insert into MyXmlTable
values
(
       '<root1>
       <node1>
       some text
       </node1>
       </root1>'
);
select
       id,
       MyXml.query('/root1/node1') as data
from MyXmlTable;
select
       id,
       MyXml.exist('/root1/node1') as data
from MyXmlTable;
select
       id,
       MyXml.value('(/root1/node1)[1]', 'varchar(40)') as data
from MyXmlTable;
                The modify() method uses the XML DML language specification.  These methods and XML DML can be referenced on BOL.
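A minimal sketch of modify() using XML DML against the table created above:
-- replace the text of node1 in every row
update MyXmlTable
set MyXml.modify('replace value of (/root1/node1/text())[1] with "new text"');

-- insert a new node after node1
update MyXmlTable
set MyXml.modify('insert <node2>more text</node2> after (/root1/node1)[1]');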
Retrieve relational data as XML
                The reverse of shredding XML into relational data (above) is retrieving relational data in the form of XML.  This is done utilizing FOR XML in your queries.  The four FOR XML modes are RAW, AUTO, PATH, and EXPLICIT.  Detailed descriptions and examples can be found here on BOL.
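A quick taste of two of the modes, run against the MyXmlTable created earlier:
select id, MyXml
from MyXmlTable
for xml raw;

select id, MyXml
from MyXmlTable
for xml path('row'), root('rows');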
XML data types
                There is a data type made specifically for XML:  it’s appropriately called xml.  Upon declaration of an xml variable, you have the option to specify the XML schema the data must conform to.  For further reading on the xml data type, reference BOL.
XML schemas
                XML schemas and namespaces are used to create “typed” XML, meaning the data is defined and checked against a set of structural rules it must follow.  For more information on how to create and alter them, please see BOL.  A minimal sketch follows.
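This creates a schema collection and a typed xml variable (the schema itself is hypothetical and deliberately tiny):
create xml schema collection MySchemaCollection
as N'<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
       <xsd:element name="root1" type="xsd:string" />
</xsd:schema>';
go
-- this variable may only hold XML that validates against the collection
declare @TypedXml xml(MySchemaCollection) = N'<root1>some text</root1>';
select @TypedXml;
go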
References
·         BOL reference on FOR XML
If there are any comments, questions, issues, or suggestions please feel free to leave a comment below or email me at sqlsalt@gmail.com.

Implement Database Objects: Create and modify constraints

(This is part of the Study Guide series, 70-464)
Microsoft’s Measured Skill description: This objective may include but is not limited to: create constraints on tables; define constraints; performance implications
What I see:
·         define constraints
·         create constraints
·         performance implications
Define constraints
                BOL sums this up nicely:  “Constraints let you define the way the Database Engine automatically enforces the integrity of a database. Constraints define rules regarding the values allowed in columns and are the standard mechanism for enforcing integrity.”  For a full description and a list of constraint types, see BOL.
Create constraints
                Creating constraints can be done in one of a few ways.  Below are a few examples:
Example
use MeasureMySkills;
go
if exists (select * from sys.tables where name = 'ConstraintTest')
       drop table ConstraintTest;
create table ConstraintTest
(
       id int identity(1, 1) not null
              check (id > 0)
);
go
drop table ConstraintTest;
go
create table ConstraintTest
(
       id int identity(1, 1) not null,
       constraint CK_Id check (id > 0)
);
go
drop table ConstraintTest;
go
create table ConstraintTest
(
       id int identity(1, 1) not null
);
go
alter table ConstraintTest
add constraint CK_Id
check (id > 0);
go
Performance implications
                I recommend reading this great article by Grant Fritchey on how foreign key constraints can actually help the optimizer in eliminating tables from the plan.
References
·         BOL reference on Constraints
If there are any comments, questions, issues, or suggestions please feel free to leave a comment below or email me at sqlsalt@gmail.com.

Implement Database Objects: Implement data types

(This is part of the Study Guide series, 70-464)
Microsoft’s Measured Skill description: This objective may include but is not limited to: use appropriate data types; develop a CLR data type; understand the difference between @Table and #table; impact of GUID (newid, newsequentialid) on database performance, indexing and privacy; use spatial data; LOB data types; understand when and how to use column store and sparse columns; implicit and explicit conversions, integer math
What I see:
·         develop a CLR data type
·         temp table vs. table var (discussed here)
·         NEWID() vs. NEWSEQUENTIALID()
·         spatial data
·         LOB data types
·         column store
·         sparse columns
·         implicit/explicit conversions
Develop a CLR data type
                CLR User-Defined Types are created by writing managed code and registering the compiled assembly within SQL Server; a type can then be created based on a class/struct in that assembly.  For a full, working example of how to accomplish this, follow the walk-through on BOL for CLR User-Defined Types.  It is worth noting the security concepts relating to the PERMISSION_SET option (can be SAFE, EXTERNAL_ACCESS, or UNSAFE).
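The T-SQL half of the process is short; a minimal sketch (the assembly path and type name are hypothetical, and the managed struct must already be compiled):
use MeasureMySkills;
go
-- register the compiled assembly (SAFE is the most restrictive permission set)
create assembly MyTypesAssembly
from N'C:\Assemblies\MyTypes.dll'
with permission_set = safe;
go
-- surface the managed struct as a SQL Server type
create type dbo.MyClrType
external name MyTypesAssembly.[MyClrType];
go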
NEWID() vs. NEWSEQUENTIALID()
                These are both functions that generate unique identifiers.  There is a very prominent difference between them, though:  NEWID() creates a nonsequential UID, and NEWSEQUENTIALID() creates a sequential UID.  When creating the key column of a clustered index, it is advantageous for it to be ever-incrementing.  If NEWID() were used for a clustered index key column, there would be page splits on INSERT statements.  But NEWSEQUENTIALID() acts more like an IDENTITY column, with incrementing values.  See the below example to prove the different natures of these functions.  Notice the identity field order for each resulting SELECT query:  one of them is random, and the other is in a nice order:
Example
use MeasureMySkills;
go
create table NewIdExample
(
       id int identity(1, 1) not null,
       NewIdVal uniqueidentifier
              default newid()
);
go
insert into NewIdExample
values(default);
go 100
create table NewSequentialIdExample
(
       id int identity(1, 1) not null,
       NewSequentialIdVal uniqueidentifier
              default newsequentialid()
);
go
insert into NewSequentialIdExample
values(default);
go 100
select top 10 *
from NewIdExample
order by NewIdVal;
select top 10 *
from NewSequentialIdExample
order by NewSequentialIdVal;
Spatial data
                See BOL’s reference on Spatial Data for information regarding these types.
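For a tiny taste of the two spatial types (the coordinates here are arbitrary):
-- geometry: flat, Euclidean plane
declare @g geometry = geometry::STGeomFromText('POINT(3 4)', 0);
select @g.STX as x, @g.STY as y;

-- geography: round-earth coordinates (SRID 4326 = WGS 84)
declare @geo geography = geography::STGeomFromText('POINT(-122.34 47.65)', 4326);
select @geo.Lat as latitude, @geo.Long as longitude;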
LOB data types
                See this description of LOB data types (written for SQL Server 2008, but the types listed are still current).  Also, Kimberly Tripp’s blog post on Understanding LOB Data is a great read.
Column store
                Columnstore indexes are one of the cool new features of SQL Server 2012.  Traditionally, indexes are stored by row, but columnstore indexes introduce storing indexed data by column.  This can yield great performance because the data is smaller (and compressed), resulting in fewer reads.  There are a lot of caveats to columnstore indexes, though:  First off, you can’t run DML against data that is included in a columnstore index.  The workaround is to drop the index before the INSERT/UPDATE/DELETE and then recreate it.  It seems the design of columnstore indexes is aimed more at data warehousing and fact tables than at OLTP systems (yet).  There’s no such thing as key columns with a columnstore index, so there is no 16-column limitation like we are typically used to.  Likewise, you can’t have an INCLUDE clause in the definition.  There are many more gotchas and things to know with columnstore indexes, so look to the reference at the bottom to get a full document.
Example
create columnstore index ICX_Department_Name
on HumanResources.Department(Name);
go
Sparse columns
                Sparse columns are a storage method SQL Server uses to save storage space.  Typically, a NULL column value still takes up space.  But if a significant portion of a column’s data is NULL, the SPARSE attribute can be used to save space, as nothing is stored when the value is NULL.  Nothing is free, though:  there is additional overhead when dealing with non-NULL data in a sparse column.
Example
use MeasureMySkills;
go
create table MySparseTable
(
       id int identity(1, 1) not null,
       SomeString nvarchar(128) sparse null
);
go
Implicit/Explicit conversions
                An explicit conversion is evident to all of us, as it can be seen in the query with functions like CAST() and CONVERT().  But implicit conversions happen behind the scenes and can be huge performance issues.  Implicit conversions can be spotted in execution plan predicates, and they can make a query non-SARGable, causing a scan where a seek is expected.
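A minimal sketch of the problem (the Accounts table and its indexed varchar(20) AccountNumber column are hypothetical); the implicit conversion on the column side can force a scan:
declare @acct nvarchar(20) = N'A-1001';

-- implicit conversion: the varchar column is converted to nvarchar,
-- which can defeat the index seek
select * from dbo.Accounts where AccountNumber = @acct;

-- explicit conversion on the parameter side keeps the predicate SARGable
select * from dbo.Accounts where AccountNumber = cast(@acct as varchar(20));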
References
·         BOL reference on CREATE TYPE
·         BOL reference on CLR UDTs
·         BOL reference on NEWID()
Version-Controlled Code (GitHub)
If there are any comments, questions, issues, or suggestions please feel free to leave a comment below or email me at sqlsalt@gmail.com.

Implement Database Objects: Maintain indexes

(This is part of the Study Guide series, 70-464)
Microsoft’s Measured Skill description: This objective may include but is not limited to: inspect physical characteristics of indexes and perform index maintenance; identify fragmented indexes; identify unused indexes; implement indexes; defrag/rebuild indexes; set up a maintenance strategy for indexes and statistics; optimize indexes (full, filter index); statistics (full, filter) force or fix queue; when to rebuild vs. reorganize and index; create a tuning and maintenance strategy for proactive operations
What I see:
·         physical index characteristics
·         index maintenance
·         index usage
·         create indexes
·         optimize indexes
Physical index characteristics
                Like any other storage, indexes get fragmented over time due to DML and page splits.  The way to gather this information is through the sys.dm_db_index_physical_stats DMV.  A basic query for these stats would be:
Example
use AdventureWorks2012;
go
declare
       @page_count_min bigint = 1000;
select
       db_name(database_id) as database_name,
       object_name(object_id) as object_name,
       index_id,
       index_type_desc,
       avg_fragmentation_in_percent,
       page_count
from sys.dm_db_index_physical_stats
(
       db_id('AdventureWorks2012'),
       default,
       default,
       default,
       'detailed'
)
where page_count >= @page_count_min
order by avg_fragmentation_in_percent desc;
                Notice the data this query returns:  the fragmentation as well as the page count.  It is typically advised to ignore fragmentation on indexes with a negligible page count (i.e. the fragmentation has little impact).  I have chosen a minimum of 1000 pages to consider.
Index maintenance
                Index maintenance includes cleaning up the fragmented indexes identified in the section above.  A rule of thumb is that an index with fragmentation between 5% and 30% should be reorganized, and greater than 30% should be rebuilt.  Again, this is just a rule of thumb and not to be taken as fact in all scenarios.  If your particular maintenance window is large enough, there’s no reason not to rebuild all fragmented indexes.  Likewise, if you are running Enterprise Edition, you have the ability to do an ONLINE index rebuild.  There are a lot of caveats to that particular method, but it does solve some index availability issues.
Index Rebuild Example
alter index PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID
on Sales.SalesOrderDetail
rebuild;
go
Index ONLINE Rebuild Example
alter index PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID
on Sales.SalesOrderDetail
rebuild
with
(
       online = on
);
go
Index Reorganize Example
alter index PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID
on Sales.SalesOrderDetail
reorganize;
go
Index usage
                Index usage can be found by querying the sys.dm_db_index_usage_stats DMV.  This DMV contains aggregated statistics of how indexes are utilized.  If an index is not contained in this DMV, it hasn’t been used since the last time SQL Server started or since the DMV stats were cleared.  Using this DMV in a LEFT/RIGHT OUTER JOIN with the sys.indexes catalog view will yield unused indexes (a sketch of that follows the usage query).  The below query shows how to get index usage statistics:
Example
select
       db_name(us.database_id) as database_name,
       object_name(us.object_id) as object_name,
       i.name,
       us.user_seeks,
       us.user_scans,
       us.user_lookups,
       us.user_updates
from sys.dm_db_index_usage_stats us
inner join sys.indexes i
on us.object_id = i.object_id
and us.index_id = i.index_id
where us.database_id = db_id();
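And here's a sketch of the unused-index side of it, treating any index with no row in the usage DMV as a candidate (verify over a full business cycle before dropping anything):
select
       object_name(i.object_id) as object_name,
       i.name as index_name
from sys.indexes i
left join sys.dm_db_index_usage_stats us
on i.object_id = us.object_id
and i.index_id = us.index_id
and us.database_id = db_id()
where us.index_id is null
and i.index_id > 0       -- ignore heaps
and objectproperty(i.object_id, 'IsUserTable') = 1;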
Create indexes
                Creating indexes can be a very simple operation, but like many things in SQL Server there is a large list of possible options that can be specified.  A basic CREATE INDEX command is shown below, but reference BOL for the full list of options.
Example
use MeasureMySkills;
go
create clustered index IX_LockedTable_Id
on LockedTable(id);
go
Optimize indexes
                Indexes can be used to greatly optimize code and query performance.  Likewise, too many or useless indexes can cause serious performance degradation for INSERT/UPDATE/DELETE statements.  But given the correct indexes, they can change a table scan to an index seek, minimizing IO, CPU time, and overall duration.  The sys.dm_db_missing_index_* prefixed DMVs can be used as suggestions for indexes that may help.  Like DTA, these should be taken as suggestions, not necessarily hard facts.
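A sketch of a query over those DMVs, ranking suggestions by a rough impact estimate:
select top 10
       object_name(mid.object_id, mid.database_id) as object_name,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks,
       migs.avg_user_impact
from sys.dm_db_missing_index_details mid
inner join sys.dm_db_missing_index_groups mig
on mid.index_handle = mig.index_handle
inner join sys.dm_db_missing_index_group_stats migs
on mig.index_group_handle = migs.group_handle
order by migs.avg_user_impact * migs.user_seeks desc;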
References
·         BOL reference on ALTER INDEX
Version-Controlled Code (GitHub)
If there are any comments, questions, issues, or suggestions please feel free to leave a comment below or email me at sqlsalt@gmail.com.

Implement Database Objects: Design the locking granularity level

(This is part of the Study Guide series, 70-464)
Microsoft’s Measured Skill description: This objective may include but is not limited to: choose the right lock mechanism for a given task; handle and/or avoid deadlocks; fix locking and blocking issues caused by previous development or third-party applications; analyze a deadlock scenario to alleviate the issue; impact of isolation level and Microsoft ADO.NET defaults; impact of locks and lock escalation; reduce locking scenarios; understand how isolation levels affect blocking and locking; identify bottlenecks in the data design and make improvements
What I see:
·         capture deadlocking
·         analyze locks
·         isolation levels and their explanations
·         ADO.NET defaults
·         lock escalation
Capture deadlocking
                There are a couple of ways to capture deadlocks.  The first one I’m going to go over is the use of a trace flag (1204).  Coupled with trace flag 3605, you can have deadlock information written directly to the SQL Server error log.  For my testing purposes, I set up the typical deadlock scenario:  I created two tables, took an exclusive lock on a different table in each of two sessions, and then, before committing, had each session update the other’s table to create my deadlock.
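Enabling both flags is a one-liner (the -1 applies them instance-wide):
dbcc traceon(1204, 3605, -1);
go
With the flags on, here’s the output I saw in my error log: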
Deadlock encountered …. Printing deadlock information
Wait-for graph
Node:1
RID: 11:1:276:0                CleanCnt:2 Mode:X Flags: 0x3
 Grant List 1:
   Owner:0x00000002F5908A80 Mode: X        Flg:0x40 Ref:0 Life:02000000 SPID:53 ECID:0 XactLockInfo: 0x00000002F2C043E8
   SPID: 53 ECID: 0 Statement Type: UPDATE Line #: 2
   Input Buf: Language Event:
       update DlTable2
       set int1 = 6
       where int2 = 1;
commit tran;
Requested by:
  ResType:LockOwner Stype:'OR'Xdes:0x00000002F2C056A8 Mode: U SPID:54 BatchID:0 ECID:0 TaskProxy:(0x00000002EB0CE608) Value:0xf5908b00 Cost:(0/144)
NULL
Node:2
RID: 11:1:278:2                CleanCnt:2 Mode:X Flags: 0x3
 Grant List 1:
   Owner:0x00000002F5908540 Mode: X        Flg:0x40 Ref:0 Life:02000000 SPID:54 ECID:0 XactLockInfo: 0x00000002F2C056E8
   SPID: 54 ECID: 0 Statement Type: UPDATE Line #: 2
   Input Buf: Language Event:
       update DlTable1
       set int1 = 3
       where int2 = 1;
commit tran;
Requested by:
  ResType:LockOwner Stype:'OR'Xdes:0x00000002F2C043A8 Mode: U SPID:53 BatchID:0 ECID:0 TaskProxy:(0x00000002EB0EA608) Value:0xf5906b80 Cost:(0/144)
NULL
Victim Resource Owner:
 ResType:LockOwner Stype:'OR'Xdes:0x00000002F2C056A8 Mode: U SPID:54 BatchID:0 ECID:0 TaskProxy:(0x00000002EB0CE608) Value:0xf5908b00 Cost:(0/144)
                Relatively interesting information; you see the two deadlock participants, as well as the statements they were running.  You can even see which SPID SQL Server chose as the victim (in this case, 54).  Another way is through SQL Trace.  There are three event classes you can capture for deadlocks:  Lock: Deadlock, Lock: Deadlock Chain, and Deadlock graph; the last of these renders the scenario visually, showing the participants and the chosen victim.
Analyze locks
                The overarching DMV for analyzing locks is sys.dm_tran_locks.  It gives a current view on the locks that are acquired on the instance.  Here’s a little test demo:
Example
use MeasureMySkills;
go
create table LockedTable
(
       id int identity(1, 1) not null,
       SomeString1 nvarchar(100) not null
              default replicate('a', 100),
       SomeString2 nvarchar(100) not null
              default replicate('b', 100)
);
go
insert into LockedTable
values(default, default);
go 100
begin tran;
       update LockedTable
       set SomeString1 = replicate('c', 100)
       where id
              between 5 and 7;
--commit tran;
select
       resource_type,
       resource_description,
       resource_associated_entity_id,
       request_mode,
       request_status
from sys.dm_tran_locks;
                What the above shows us is that there are three RID locks (these would be KEY locks if the table had a clustered index) with exclusive (X) locks on them.  Notice the intent locks (IX) on the objects higher in the lock hierarchy, including a page and an object (the actual table itself).  This is an optimization strategy by SQL Server to give quick notification that there is an exclusive lock (X) somewhere lower in the lock hierarchy.  This DMV can show a vast amount of information pertaining to current locking issues.  Also, if you want a good idea of whether locks are causing a lot of waits, you can look at the sys.dm_os_wait_stats DMV; if an LCK_M_* prefixed wait type is high up there, then you’re experiencing a cumulatively large delay due to concurrency.
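A quick sketch of that wait-stats check:
-- cumulative lock waits since the last restart (or stats clear)
select
       wait_type,
       waiting_tasks_count,
       wait_time_ms
from sys.dm_os_wait_stats
where wait_type like 'LCK_M_%'
order by wait_time_ms desc;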
Isolation levels
                This topic can be a one-sentence explanation for each isolation level, or it can go very deep due to the nature of these concurrency policies.  Because of this, I recommend reading what each of them is and learning how they prevent or allow dirty reads, non-repeatable reads, and phantom reads.
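Switching levels for a session is a single statement; a small sketch contrasting two of them:
-- allow dirty reads for this session
set transaction isolation level read uncommitted;

-- back to the default
set transaction isolation level read committed;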
ADO.NET defaults
                I believe the answer here is READ COMMITTED.  I wrote a little application to open a connection and select the transaction isolation level for the current SPID, and that’s what my result was.  If you have another theory, I’d love to hear it; nothing too evident and clear-cut turned up while researching this topic.
Lock escalation
                Lock escalation is the process SQL Server goes through when it holds too many row or page locks and attempts to escalate them to a table lock.  It converts the IS/IX intent lock to the appropriate S/X lock.  These thresholds will cause it:
·         a statement obtains more than 5,000 row or page locks (and lock escalation isn’t disabled)
·         memory consumption of config threshold is reached
At this point, SQL Server will attempt to escalate to a table lock.  If this isn’t possible due to concurrency restrictions, the Database Engine will attempt the escalation again after another 1,250 locks have been acquired.  You can disable lock escalation in a few ways:
Example
alter table LockedTable
set
(
       lock_escalation = disable
);
go
dbcc traceon(1211); -- or 1224
go
                The first method is at the table level.  The second is instance-wide.  Trace flag 1211 disables all lock escalation, whereas 1224 just disables it based on number of locks (but doesn’t prevent based on memory pressure).
References
Version-Controlled Code (GitHub)
If there are any comments, questions, issues, or suggestions please feel free to leave a comment below or email me at sqlsalt@gmail.com.

Implement Database Objects: Design, implement, and troubleshoot security

(This is part of the Study Guide series, 70-464)
Microsoft’s Measured Skill description: This objective may include but is not limited to: grant, deny, revoke; connection issues; execute as; certificates; loginless user; database roles and permissions; contained users; change permission chains
What I see:
·         grant, deny, and revoke permissions
·         execute as
·         certificates
·         loginless users
·         database roles
·         contained users
·         ownership chaining (and changing object owner)
GRANT, DENY, and REVOKE Permissions
                GRANT, DENY, and REVOKE perform their respective action on a securable for a given principal (whether it be a login, user, role, etc.).  GRANT gives the permission, DENY prevents the principal from inheriting the permission, and REVOKE removes any GRANT or DENY on that permission for the given principal.  There are too many securables to list here, so see the BOL reference to skim them.
Example
-- *** <DB OBJECT SETUP>    ***
use MeasureMySkills
go
create table dbo.PermTable
(
       id int identity(1, 1) not null,
       SomeData nvarchar(100) not null
              default replicate('a', 100)
)
go
insert into dbo.PermTable
values(default)
go
-- *** </DB OBJECT SETUP>   ***
create procedure dbo.PermProc
as
       select 1
go
create login PermLogin
with password = 'password',
check_policy = off
go
use MeasureMySkills
go
create user PermUser
for login PermLogin
go
grant select
on PermTable
to PermUser
go
-- this will work
execute as user = 'PermUser'
go
select *
from PermTable
revert
go
-- this will not work
deny select
on schema::dbo
to PermUser
go
execute as user = 'PermUser'
go
select *
from PermTable
revert
go
revoke select
on schema::dbo
to PermUser
go
-- this will not work
execute as user = 'PermUser'
go
exec dbo.PermProc
revert
go
-- this will work
grant execute
on PermProc
to PermUser
go
execute as user = 'PermUser'
go
exec dbo.PermProc
revert
go
EXECUTE AS
                EXECUTE AS is a clause that executes T-SQL in the specified security context.  It can also be used for functions, stored procedures, and triggers.  In the above example, to test out my security code I use the inline EXECUTE AS.
Example
use MeasureMySkills
go
-- add PermUser to db_owner role
exec sp_addrolemember 'db_owner', 'PermUser'
go
create procedure ExecAsDBO
with execute as 'dbo'
as
       select user_name()
go
create procedure ExecAsCaller
with execute as caller
as
       select user_name()
go
execute as user = 'PermUser'
go
exec ExecAsDBO
go
exec ExecAsCaller
go
revert
go
Notes
·         options for the EXECUTE AS clause can be CALLER (the caller of the code), SELF (the creator or most recent modifier of the module), OWNER (the current owner of the module), ‘user_name’, or ‘login_name’ (only for server DDL triggers or logon triggers)
Certificates
                Certificates are a database-level securable, as per BOL.  We will see later on their extensive use, but for this section we will just cover how to create them and a few nuances.
Example
use MeasureMySkills
go
create certificate CertTest
encryption by password = 'certpassword'
with
       subject = 'Test Certificate',
       expiry_date = '2012-6-15'
go
Notes
·         a certificate is encrypted by either a password or the database master key; if there is no database master key, an encryption password must be specified
·         START_DATE is the date the certificate becomes valid; it is optional, and if not supplied it defaults to the current date
·         EXPIRY_DATE is the date the certificate stops being valid; if unspecified, it defaults to one year after the START_DATE
Loginless Users
                Loginless users are just that:  database users that are created without being tied to an instance login.  They can be used as a security measure for impersonation (for example, as the target of EXECUTE AS).
Example
use MeasureMySkills
go
create user LoginlessUser1
without login
go
Notes
·         this user is not mapped to a login, and therefore cannot connect to any other database
Database roles
                Database roles are database-level securables that can have a set of permissions and members.  They are used to group permissions, as opposed to granting/denying specific permissions to individual database users.
Example
use MeasureMySkills
go
create role db_testrole
go
alter role db_testrole
add member PermUser
go
grant execute
on schema::dbo
to db_testrole
go
Contained users
                Partially contained databases are one of the cool new features of SQL Server 2012.  Contained users are simply the users in a contained database.  There are two types (as per BOL), and a minimal sketch of the first type follows the list:
·         Contained database user with password – authenticated by the database
·         Windows principals – trusted authentication by the database for Windows users of authorized Windows groups
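Assuming contained database authentication is enabled at the instance level and the database’s containment is set to partial, creating the first type looks like this (names and password are hypothetical):
-- instance-level prerequisite
exec sp_configure 'contained database authentication', 1;
reconfigure;
go
alter database MeasureMySkills
set containment = partial;
go
use MeasureMySkills;
go
-- a user authenticated by the database itself, no login required
create user ContainedUser
with password = 'StrongP@ssword1';
go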
Ownership chaining
                Ownership chaining is a very interesting security measure, but it can be a bit hard to understand.  For an in-depth look and a great example, take a look at the BOL reference here.  In short, an unbroken ownership chain is when the owner of the calling object and the called object is the same; in that case, permission checks on the called object are skipped.  But when the owner of the calling object differs from that of the called object, a permission check is performed based on the calling user.  There is also Cross-Database Ownership Chaining, an instance-level configuration (disabled by default) that enables or disables ownership chaining between databases.  It poses a security hole, which is why it is disabled by default.  To change an object’s owner, simply run ALTER AUTHORIZATION on the object.
Example
use MeasureMySkills
go
alter authorization
on PermTable
to dbo
go
select
       o.name,
       case
              when o.principal_id is null then 'SCHEMA OWNER'
              else p.name
       end as object_owner
from sys.objects o
left join sys.database_principals p
on o.principal_id = p.principal_id
where o.name = 'permtable'
alter authorization
on PermTable
to schema owner
go
select
       o.name,
       case
              when o.principal_id is null then 'SCHEMA OWNER'
              else p.name
       end as object_owner
from sys.objects o
left join sys.database_principals p
on o.principal_id = p.principal_id
where o.name = 'permtable'
References
·         BOL reference on EXECUTE AS
·         BOL reference on CREATE USER
·         BOL reference on CREATE ROLE
·         BOL reference on ALTER ROLE
Version-Controlled Code (GitHub)
If there are any comments, questions, issues, or suggestions please feel free to leave a comment below or email me at sqlsalt@gmail.com.