Wednesday, January 23, 2013

Query All Default Trace Files at Once

Happy Wednesday!

I wanted to share something I came across today while researching a way to query ALL the default traces produced by SQL Server.  I knew how to query for the path and name of the current trace, but since default traces only grow to 20MB before rolling over to a fresh file, oftentimes the information I want isn't in the current trace.  What I really want is to query data from all five at one time.

To query all five default trace files, I figured I was going to have to write some fancy parser that would loop through and decrement the numeric suffix in the trace file naming convention.  For example, if the current default trace is named log_409.trc, the previous four would be log_408.trc, log_407.trc, etc.  Fortunately, what I'm going to share is that the names of these files don't matter.  If you query for the current default trace and replace the file name with just log.trc (an invalid trace file name), you will in fact be querying all five at once.  What a golden nugget!

In my example I'm going to share the script I use to query for all auto-growth and auto-shrink activities.  Keep in mind this isn't an open debate about auto-shrink - it's just an example ;-)  If you run this script using a Central Management Server, you can query a whole host of instances at the same time, even if the paths to their default traces differ.


DECLARE @filename NVARCHAR(4000);

-- Current default trace
SELECT  @filename = CAST(value AS NVARCHAR(4000))
FROM    ::FN_TRACE_GETINFO(DEFAULT)
WHERE   traceid = 1
        AND property = 2


-- Preserve the path and replace the current default trace with log.trc
SET @filename = LEFT(@filename,
                     LEN(@filename) - CHARINDEX('\',REVERSE(@filename)))
    + '\log.trc'




-- Auto shrink/growth events across all default trace files
SELECT  DB_NAME(ftg.databaseid) AS DatabaseName
       ,ftg.FILENAME
       ,te.name AS EventName
       ,MIN(ftg.StartTime) AS FirstOccur
       ,MAX(ftg.EndTime) AS LastFinished
       ,COUNT(*) AS NumOccur
       ,( SUM(ftg.IntegerData * 8) ) / 1024.0 AS AmountMB
       ,( SUM(ftg.duration / 1000) ) AS MilliSec
FROM    ::FN_TRACE_GETTABLE(@filename,DEFAULT) AS ftg
        INNER JOIN sys.trace_events AS te ON ftg.EventClass = te.trace_event_id
WHERE   te.category_id = 2
GROUP BY DB_NAME(ftg.databaseid)
       ,ftg.FILENAME
       ,te.trace_event_id
       ,te.name
ORDER BY MIN(ftg.StartTime) DESC
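
As a side note, the default trace path can also be pulled from sys.traces rather than FN_TRACE_GETINFO; here is a standalone sketch of that same first step, followed by the same log.trc swap:

-- Alternative: get the current default trace path from sys.traces
DECLARE @filename NVARCHAR(4000);

SELECT  @filename = [path]
FROM    sys.traces
WHERE   is_default = 1;

-- Swap the rolling file name for log.trc to read all five files at once
SET @filename = LEFT(@filename,
                     LEN(@filename) - CHARINDEX('\',REVERSE(@filename)))
    + '\log.trc';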


Tuesday, January 22, 2013

Optimize for Ad hoc Workloads

I read a post today that talked about the instance-level setting referred to as "Optimize for Ad hoc Workloads". The post focused on how SQL Server stores either compiled plans or compiled plan stubs. Equally important to understanding the technical details of how plans are cached is knowing how much storage each cache store type is consuming, whether it be stored procedures, prepared statements, ad hoc queries, etc. With that information you can determine how your instance may be best configured based on the workload data.

Here is the query I run periodically to examine my workload:


SET NOCOUNT ON

-- Plan cache breakdown by cache store type
SELECT  objtype AS [Cache Store Type]
       ,COUNT_BIG(*) AS [Total Plans]
       ,SUM(CAST(size_in_bytes AS DECIMAL(14,2))) / 1048576 AS [Total MBs]
       ,AVG(usecounts) AS [Avg Use Count]
       ,SUM(CASE WHEN usecounts = 1 THEN 1
                 ELSE 0
            END) AS [Total Plans - USE Count 1]
       ,SUM(CAST(( CASE WHEN usecounts = 1 THEN size_in_bytes
                        ELSE 0
                   END ) AS DECIMAL(14,2))) / 1048576 AS [Total MBs - USE Count 1]
FROM    sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY [Total MBs - USE Count 1] DESC

-- Compare single-use ad hoc plan size to the entire plan cache
DECLARE @AdHocSizeInMB DECIMAL(14,2)
       ,@TotalSizeInMB DECIMAL(14,2)

SELECT  @AdHocSizeInMB = SUM(CAST(( CASE WHEN usecounts = 1
                                              AND LOWER(objtype) = 'adhoc'
                                         THEN size_in_bytes
                                         ELSE 0
                                    END ) AS DECIMAL(14,2))) / 1048576
       ,@TotalSizeInMB = SUM(CAST (size_in_bytes AS DECIMAL(14,2))) / 1048576
FROM    sys.dm_exec_cached_plans 

SELECT  @AdHocSizeInMB AS [Total MBs - USE Count 1]
       ,@TotalSizeInMB AS [Total MBs - entire cache]
       ,CAST(( @AdHocSizeInMB / @TotalSizeInMB ) * 100 AS DECIMAL(14,2)) AS [% of cache occupied by adhoc plans only used once]
IF @AdHocSizeInMB > 200
    OR ( ( @AdHocSizeInMB / @TotalSizeInMB ) * 100 ) > 25  -- 200MB or > 25%
    SELECT  'Switch on Optimize for Ad hoc Workloads as it will make a significant difference' AS [Recommendation]
ELSE 
    SELECT  'Setting Optimize for Ad hoc Workloads will make little difference' AS [Recommendation]
GO
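
If the numbers point toward turning the setting on, it's an instance-level option controlled through sp_configure; because it's an advanced option, show advanced options has to be enabled first. A minimal sketch:

-- Enable the instance-level setting (requires ALTER SETTINGS permission)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;

Keep in mind the setting only affects plans compiled after it is enabled - anything already in cache stays there until it ages out or the cache is cleared.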

Monday, January 21, 2013

Query the SQL Server Error Logs

Part of my standard work as a DBA is keeping tabs on the various logs, whether it be the Windows Event log or SQL Server's error and Agent logs. Below is a handy script I use to further filter the results of sp_readerrorlog while still preserving related, relevant entries. As with any repetitive activity, I highly recommend automating this by creating and subscribing to an SSRS report on a weekly basis. Another option is to set up a Central Management Server and run the query against a number of instances at once - that saves you from creating one report for each instance.



IF OBJECT_ID(N'tempdb..#ERRLOG') IS NOT NULL 
    BEGIN
        DROP TABLE [#ERRLOG]
    END
CREATE TABLE #ERRLOG
    (
     LogDate DATETIME
    ,ProcessInfo NVARCHAR(512)
    ,[Text] NVARCHAR(4000)
    )

-- Load the current SQL Server error log (log 0) into the temp table
INSERT  INTO [#ERRLOG]
        ( 
         [LogDate]
        ,[ProcessInfo]
        ,[Text] 
        )
        EXEC [master].sys.[sp_readerrorlog] @p1 = 0, @p2 = 1, @p3 = NULL, @p4 = NULL

CREATE CLUSTERED INDEX IX_#ERRLOG_LogDate ON [#ERRLOG] ( LogDate )

-- Self-join on date and process so messages related to each ERROR line are returned too
SELECT  e2.[LogDate]
       ,e2.[ProcessInfo]
       ,e2.[Text]
FROM    [#ERRLOG] AS e
        LEFT JOIN [#ERRLOG] AS e2 ON e2.LogDate = e.[LogDate]
                                     AND e2.ProcessInfo = e.[ProcessInfo]
WHERE   e.[Text] LIKE N'%ERROR:%'
        AND DATEDIFF(d,e2.LogDate,CURRENT_TIMESTAMP) < 2
ORDER BY e2.[LogDate] DESC

IF OBJECT_ID(N'tempdb..#ERRLOG') IS NOT NULL 
    BEGIN
        DROP TABLE [#ERRLOG]
    END
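
A quick note on the parameters in case you want to adapt the script: the first parameter of sp_readerrorlog is the log number (0 = current, 1 = the previous archive, and so on) and the second is the log type. As far as I know, passing 2 instead of 1 reads the SQL Server Agent error log rather than the engine error log:

-- Read the current SQL Server Agent error log instead of the engine error log
EXEC [master].sys.[sp_readerrorlog] @p1 = 0, @p2 = 2, @p3 = NULL, @p4 = NULL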

Thursday, January 10, 2013

SQL Server Row and Page Compression

One feature I haven't taken advantage of is SQL Server's Row and Page Compression.  The reason I've decided to look into it today is that - all things considered - IO is the one area where I could improve performance on most of my instances.  Couple that with the fact that CPU utilization rarely nears 30% and we've got the perfect situation for compression.  Enabling compression on tables (clustered indexes) and non-clustered indexes adds a bit of CPU overhead, but when enabled on the appropriate objects that overhead will likely be more than offset by the reduction in IO.
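
For reference, compression is turned on per object with a rebuild. A minimal sketch, using a hypothetical dbo.SalesOrderDetail table and one of its non-clustered indexes:

-- Enable page compression on the table itself (heap or clustered index)
ALTER TABLE dbo.SalesOrderDetail
    REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Enable row compression on a specific non-clustered index
ALTER INDEX IX_SalesOrderDetail_ProductID
    ON dbo.SalesOrderDetail
    REBUILD WITH (DATA_COMPRESSION = ROW);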

One tool I had never heard of is the SQL Server Compression Estimator.  Prior to today I was using some scripts built around sp_estimate_data_compression_savings.  The Compression Estimator is a handy little tool that lets you point at an instance/database and, after some bare-bones configuration, you are off and running.  The tool will give you the current size of each object, the compressed size, and the percent savings, and if you choose it will also give you the update ratio.  You will want to do additional analysis of objects with a high update ratio - compressing those objects can actually have an adverse effect on performance (a very active logging table that is never read from, for example).
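
If you want to stay in T-SQL for the estimate, sp_estimate_data_compression_savings does the same kind of what-if analysis; here is a quick sketch against the same hypothetical table:

-- Estimate how much space page compression would save for one table
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo'
    ,@object_name = 'SalesOrderDetail'
    ,@index_id = NULL            -- NULL = all indexes on the table
    ,@partition_number = NULL    -- NULL = all partitions
    ,@data_compression = 'PAGE'  -- NONE, ROW, or PAGE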

One gotcha that I'm really glad I read about is the fact that compression is only supported in Developer and Enterprise Editions.  That not only applies to the instances where you enable compression - keep in mind the editions of SQL Server you may restore to in the event of a disaster, or the edition you may be running locally on your laptop (in case you pull copies of databases and restore them for development purposes).  I verified this by creating a compressed database on my development box and trying to restore it on one of my servers running Standard Edition - the following error will occur:
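
Error message aside, you can check a database for Enterprise-only features before ever attempting the restore by querying sys.dm_db_persisted_sku_features from within the source database:

-- Lists features (such as Compression) that would block a restore to Standard Edition
SELECT  feature_name
       ,feature_id
FROM    sys.dm_db_persisted_sku_features;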


Wednesday, January 9, 2013

SQLIO - Analyze results with SSRS

If you are a SQL Server DBA and haven't heard of or used SQLIO, there is a great post by Brent Ozar that walks you through the process.  As noted in the post, SQLIO really has nothing to do with SQL Server in particular; rather, it's a free utility from Microsoft that measures storage IO performance.


Now, once you've run those massive, time-consuming scripts with scores of parameter combinations, how can you easily decipher the data?  Since I knew I would be using this process many times in my career, I decided to put together an SSRS report - a picture is often worth a million words.  If you are interested in getting a copy of the RDL, please let me know and I would be more than happy to share it with you.

Since the report is rather large, I'll share sections at a time.

First, the header area with various parameters you can select:


Once you select your various parameters, there are three core metrics you are interested in: IOPS, Latency and Throughput.

IOPS:

Latency:

Throughput:

Then at the bottom I share the raw output with sortable headers - it comes in handy:


Tuesday, January 8, 2013

Monitoring SQL Server Auto-Growths

On a periodic basis I'll do some analysis of data and transaction log file growths to see if my auto-growth increments should be adjusted.  I realize some prefer not to enable auto-growth, but I'm fine allowing my data files to grow automatically - I just make sure I monitor and adjust accordingly.

This is the script I run periodically to get a high-level summary of auto-growths in my current default trace file - if you use a Central Management Server, you could run this against all your instances at once.
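
A minimal sketch of that kind of summary, reusing the same FN_TRACE_GETTABLE approach against the current default trace file only (the exact columns you pull may vary):

DECLARE @filename NVARCHAR(4000);

-- Path to the current default trace file
SELECT  @filename = [path]
FROM    sys.traces
WHERE   is_default = 1;

-- Auto-growth summary per database from the current default trace
SELECT  DB_NAME(ftg.databaseid) AS DatabaseName
       ,te.name AS EventName
       ,COUNT(*) AS NumOccur
       ,SUM(ftg.IntegerData * 8) / 1024.0 AS AmountMB
       ,MAX(ftg.EndTime) AS LastOccurred
FROM    ::FN_TRACE_GETTABLE(@filename, DEFAULT) AS ftg
        INNER JOIN sys.trace_events AS te ON ftg.EventClass = te.trace_event_id
WHERE   te.name LIKE '%Auto Grow'
GROUP BY DB_NAME(ftg.databaseid)
       ,te.name
ORDER BY MAX(ftg.EndTime) DESC;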


The results will look something like this - I have removed my database names: