Report Time in proper format

I was working on a restore script to test our backups, and I also made a Reporting Services report that lists which backups were used to restore each database, how long each one took, and the size of each backup. While the report was helpful, the restore times and backup sizes were hard to read at a glance. The main reason is that some databases are much smaller than others, so their backup sizes and restore times are far smaller as well.

This post covers how I addressed the restore time issue in my report. I also made a similar function to report on backup size, and you can read about that HERE.

I was initially reporting the time in seconds, but as mentioned, the bigger databases were taking almost 2 hours to restore, so the restore time showed up as 7200. It is not easy to read 7200 seconds as 2 hours at a glance. To fix this, I made a scalar function that takes the input time in seconds and returns the time in hh:mm:ss format.

For example: if a restore took 240 seconds, it will return 00:04:00.
If a restore took 7200 seconds, it will return 2:00:00.

Below is the function:

USE [MSDB]
GO
CREATE FUNCTION [dbo].[fncTime] (@TimeIn int)
RETURNS varchar(11)
AS
Begin
    declare @Hr int
    declare @Hold int
    declare @TimeOut varchar(11)

    -- Under a minute: only the seconds part is non-zero
    IF (@TimeIn < 60)
    Begin
        select @TimeOut = '00:00:' + right('0'+cast(@TimeIn as varchar(2)),2)
    End

    -- Between one minute and one hour: minutes and seconds
    IF (@TimeIn >= 60 and @TimeIn < 3600)
    Begin
        select @TimeOut = '00:' + right('0'+cast((@TimeIn/60) as varchar(2)),2)
                        + ':' + right('0'+cast((@TimeIn%60) as varchar(2)),2)
    End

    -- One hour or more: hours, then minutes and seconds from the remainder
    IF (@TimeIn >= 3600)
    Begin
        select @Hr   = @TimeIn/3600
        select @Hold = @TimeIn%3600
        IF (@Hold > 0)
        Begin
            select @TimeOut = cast(@Hr as varchar(5)) + ':'
                            + right('0'+cast((@Hold/60) as varchar(2)),2) + ':'
                            + right('0'+cast((@Hold%60) as varchar(2)),2)
        End
        Else
        Begin
            select @TimeOut = cast(@Hr as varchar(5)) + ':00:00'
        End
    End
    RETURN @TimeOut
End
GO

Output : select msdb.dbo.[fncTime](45085) as [RestoreDuration(hh:mm:ss)]

Returns: 12:31:25

List of all replicated objects from Publisher DB

The query further below lists the objects that are being replicated along with the subscriber details; it must be run in the publisher database. To find the list of published databases on the server, we can first use this query:

Select Name from master.sys.databases where is_published =1
Now, we can find the list of objects from the published database:

USE [PushRepl]
GO
With ReplicationObjects as
(Select pubid,artID,dest_object,dest_owner,objid,name from sysschemaarticles
union
Select pubid,artID,dest_table,dest_owner,objid,name from sysarticles)

Select Serverproperty('ServerName') as [PublisherServer],
B.name as [PublisherName],DB_Name() as [PublisherDB],
E.Name+'.'+A.Name as [PublisherTableName],D.Type_desc,
A.dest_owner+'.'+A.dest_Object as [SubscriberTableName],
C.dest_db as [SubscriberDB],C.srvname as [SubscriberServer]
From ReplicationObjects A
Inner Join syspublications B on A.pubid=B.pubid
Inner Join dbo.syssubscriptions C on C.artid=A.artid
Inner Join sys.objects D on A.objid=D.Object_id
Inner Join sys.schemas E on E.Schema_id=D.Schema_id
Where dest_db not in ('Virtual')


Query to find row counts for all tables in a DB.

The query below returns the row count for every user table in the current database:

select DB_NAME()+'.'+C.Name+'.'+A.Name as [TableName], B.TotalRows
from sys.objects A
Inner Join (Select sum(rows) as TotalRows, object_id
            from sys.partitions
            where index_id in (0,1)   -- heap or clustered index only, so rows are not counted twice
            group by object_id) B on A.object_id = B.object_id
Inner Join sys.schemas C on C.schema_id = A.schema_id
where A.type = 'U'
order by C.Name, A.Name

Note: I have seen some people mention online that the only fully accurate way of finding the row count is by running COUNT(*) against the table itself. I could not verify this, but it may be something to keep in mind.
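
If you want to cross-check the metadata count against an actual count for one table, a minimal sketch is below; dbo.MyTable is a placeholder name, not a table from this post:

-- Row count from metadata (heap/clustered index partitions only) vs. an actual COUNT(*).
-- dbo.MyTable is a hypothetical table name; replace it with the table you want to verify.
Select sum(rows) as MetadataRows
from sys.partitions
where object_id = object_id('dbo.MyTable') and index_id in (0,1)

Select count(*) as ActualRows from dbo.MyTable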

Query to find the Log File Internals

The following script helps in understanding the log file. It gives details about the transaction log, such as the number of VLFs, internal log space usage, when the last log backup was taken, the DB recovery model, and so on. I think this script helps in understanding internal log file usage and can be used as a health check script.

/*Databases that are set to 'AutoClose' are not considered by dbcc sqlperf(logspace), and offline databases are not considered either. [FreeSpace_IN_LogFile_toShrink_InMB] is the amount of shrinkable log file space in MB. It is calculated by summing the inactive VLFs from the end of the log file backwards until the first active VLF (Status = 2) is hit. This does not mean you *can* shrink the log file.
Please use caution. This is only for informational purposes.
Also, added code to check the SQL version because dbcc loginfo on SQL 2012 and above returns one extra column (RecoveryUnitId).*/

If object_ID('Tempdb.dbo.#TempMain') is not null
Begin
Drop Table #TempMain
End

Create Table #TempMain (
SeqID int Identity(1,1),[Database Name] varchar(max),[Log Size(MB)] decimal(15,3),[Log Space Used(%)] decimal(15,3),[Status] int,UsedLogFileSize_InMB decimal(15,3),No_of_Active_VLFs int,
UnUsedLogFileSize_InMB decimal(15,3),No_Of_InActive_VLFs int,
ReasonFor_LogFile varchar(50),LastLogBackupTaken datetime,Db_RecoveryMode varchar(11),[FreeSpace_IN_LogFile_toShrink_InMB] decimal(15,3))

If object_ID('Tempdb.dbo.#Temp') is not null
Begin
Drop Table #Temp
End

Create Table #Temp (ID int Identity(1,1),RecoveryUnitId int,FileID tinyint,FileSize bigint,StartOffset varchar(max),FSeqNo int,[Status] tinyint,Parity tinyint,CreateLSN varchar(max))
declare @SQlLogSpace nvarchar(max)
set @SQlLogSpace = 'dbcc sqlperf(logSpace)'

Insert #TempMain([Database Name],[Log Size(MB)],[Log Space Used(%)],[Status])
Execute(@SQlLogSpace)

update #TempMain set Db_RecoveryMode =A.recovery_model_desc,
ReasonFor_LogFile=Log_reuse_wait_desc,LastLogBackupTaken=C.[LastLogBackupTime]
from Sys.databases A INNER JOIN #TempMain B on B.[Database Name]=A.Name
LEFT OUTER JOIN (Select max(backup_finish_date) as [LastLogBackupTime],database_name from msdb.dbo.backupset A
INNER JOIN Sys.databases B on B.Name=A.Database_Name
where type ='L' and B.recovery_model_desc in ('FULL','Bulk-Logged')
group by database_name) C on C.[database_Name]=A.Name

update #TempMain set UsedLogFileSize_InMB=cast(cast(([Log Size(MB)]) as decimal(20,3))*cast(([Log Space Used(%)]/100) as decimal(20,3))as decimal(20,3)),
unUsedLogFileSize_InMB=cast(([Log Size(MB)]) as decimal(20,3))-cast(cast(([Log Size(MB)]) as decimal(20,3))*cast(([Log Space Used(%)]/100) as decimal(20,3))as decimal(20,3))
from #TempMain

declare @a int,@b varchar(max)
select @a =min(SeqID) from #TempMain
While (@a>=1)

Begin
Select @b=[Database Name] from #TempMain where seqID=@a
declare @Sql nvarchar(max)
IF (Substring(cast(SERVERPROPERTY('productversion') as varchar(16)),1,
CHARINDEX('.',cast(SERVERPROPERTY('productversion') as varchar(16)),1)-1))>10
Begin
Set @sql = 'USE ['+@B +'] declare @SQlLogInfo nvarchar(max)
set @SQlLogInfo = ''dbcc loginfo'' Insert into #Temp( RecoveryUnitId,FileID,FileSize,StartOffSet,FSeqNo,Status,Parity,CreateLSN)
Execute(@SQlLogInfo)'
End
Else
Begin
Set @sql = 'USE ['+@B +'] declare @SQlLogInfo nvarchar(max)
set @SQlLogInfo = ''dbcc loginfo'' Insert into #Temp( FileID,FileSize,StartOffSet,FSeqNo,Status,Parity,CreateLSN)
Execute(@SQlLogInfo)'
End

Execute Sp_Executesql @SQL

Update #TempMain Set No_Of_InActive_VLFs =B
From (Select sum(FileSize) as A ,Count(FileID) as B from #Temp where [status] = 0) A where #TempMain.SeqID=@a

Update #TempMain Set No_Of_Active_VLFs = B
From (Select sum(FileSize) as A ,Count(FileID) as B from #Temp where [status] = 2) A where #TempMain.SeqID=@a

Update #TempMain Set [FreeSpace_IN_LogFile_toShrink_InMB] = A
From (select cast((cast(sum(FileSize) as decimal(15,3))/1024.00/1024.00) as decimal(15,3)) as A from #Temp where ID>(Select Max(ID) from #Temp where Status=2)) A where #TempMain.SeqID=@a

Truncate table #Temp

IF (@a<(Select Max(seqId) from #TempMain))
Begin
Set @a=@a+1
continue
End
Else
Begin
select SeqID,[Database Name],Db_RecoveryMode,[Log Size(MB)],
[Log Space Used(%)],UsedLogFileSize_InMB,No_Of_Active_VLFs,
UnUsedLogFileSize_InMB,No_Of_InActive_VLFs,[FreeSpace_In_LogFile_ToShrink_InMB],
ReasonFor_LogFile,LastLogBackupTaken from #TempMain
Return
End
Drop table #TempMain
End

Detailed Database and DB File Sizes Info

The following code provides information about database and related file/filegroup sizes, along with the physical path of each file.

--Offline DB's are not considered---
If Object_ID('tempdb..#tmpFileSizes') is not null
BEGIN
Drop Table tempdb..#tmpFileSizes
END
Create table #tmpFileSizes (DBName varchar(100),DBFileGroup varchar(200),FileID int,DBLogicalFileName varchar(200),DBPhysicalFilePath varchar(max))
GO
sp_msforeachdb 'USE [?] Insert into #tmpFileSizes select DB_NAME(),isnull(B.GroupName,''LOG''),A.FileID,A.Name,A.FileName from sys.sysfiles A LEFT OUTER JOIN sys.sysfilegroups B on A.groupID=B.groupID'

---- This query gives individual file sizes per filegroup for each DB--------
Select DB_Name(database_id)as DBName,B.DBFileGroup,B.DBLogicalFileName,DBPhysicalFilePath,
Sum(size_on_disk_bytes)/1024/1024 as [File_SizeIN_MB],NULL as [FG_SizeIN_MB],NULL as [DB_SizeIN_MB]
from sys.dm_io_virtual_file_stats(null,null) A INNER JOIN #TmpFileSizes B on
DB_Name(A.database_id)=B.DBName and A.File_id=B.fileID
group by db_name(database_id),B.DBLogicalFileName,B.DBFILEGROUP,DBPhysicalFilePath
Union
/*This query groups the filegroups and gives the total size per filegroup. It only produces a filegroup total row when the filegroup contains more than one file; otherwise the individual file size shown in the [File_SizeIN_MB] column already equals the filegroup size.*/
Select DB_Name(database_id)as DBName,+B.DBFileGroup+ ' Total FG Size',NULL as DBLogicalFileName,NULL as DBPhysicalFilePath,
NULL as [File_SizeIN_MB],Sum(size_on_disk_bytes)/1024/1024 as [FG_SizeIN_MB],NULL as [DB_SizeIN_MB]
from sys.dm_io_virtual_file_stats(null,null) A INNER JOIN #TmpFileSizes B on
DB_Name(A.database_id)=B.DBName and A.File_id=B.fileID
group by db_name(database_id),B.DBFILEGROUP
having count(DBLogicalFileName)>1
Union
---- This Query gives the overall Database Size---
Select DB_Name(database_id) + ' Total DB Size'as DBName,NULL as DBFileGroup,NULL as DBFileName,NULL as DBPhysicalFilePath,
NULL as [File_SizeIN_MB],NULL as [FG_SizeIN_MB],Sum(size_on_disk_bytes)/1024/1024 as [DB_SIZEIN_MB]
from sys.dm_io_virtual_file_stats(null,null) A INNER JOIN #TmpFileSizes B on
DB_Name(A.database_id)=B.DBName and A.File_id=B.fileID
group by db_name(database_id)
order by DBName,DBFileGroup ,DBLogicalFileName desc,[FG_SIZEIN_MB]

If Object_ID('tempdb..#tmpFileSizes') is not null
BEGIN
Drop Table tempdb..#tmpFileSizes
END

Database and DB File sizes with Rollup


The query below provides info on both the total size of each database and the sizes of its individual files, using GROUP BY ... WITH ROLLUP.

select case when (Grouping(db_name(database_id))=1) then 'ALL DB''s' else DB_NAME(database_id) end as DB,
case when (Grouping((cast (file_id as varchar(3))))=1)
then isnull(DB_NAME(database_id),'ALL') +' DB' else (cast(file_id as varchar(3))) end as DBFiles,
sum(size_on_disk_bytes)/1024 as [SizeIN_KB]
from sys.dm_io_virtual_file_stats(null,null)
group by db_name(database_id),cast(file_id as varchar(3)) with rollup
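
If you only need the rollup for the database you are connected to, a minimal variation is below (reporting in MB here is my own choice, not part of the original query):

-- One row per file of the current database plus a rollup total row.
select case when Grouping(cast(file_id as varchar(3)))=1 then DB_NAME()+' DB Total'
       else cast(file_id as varchar(3)) end as DBFile,
sum(size_on_disk_bytes)/1024/1024 as [SizeIN_MB]
from sys.dm_io_virtual_file_stats(DB_ID(),null)
group by cast(file_id as varchar(3)) with rollup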

Creation of procs,udfs and views

While creating stored procedures and functions, the parser only checks the syntax, and once the syntax is cleared by the parser, the proc is created. This does not necessarily mean the procedure is going to work, because the parser does not check whether all the referenced objects are correct (this behavior is known as deferred name resolution). For example, the columns and/or tables referenced inside the procedure might not exist, but as long as the T-SQL syntax is correct, creating the procedure will succeed.

Obviously, when the procedure is executed, it tries to generate an execution plan, and that is where it catches the mistake and throws an error. The same thing applies to user-defined functions.

But it is different with views: if you mistype a column or table name, the creation of the view fails. I think the reason is that, even though a view does not hold any data, creating it materializes the view definition, and that step requires validating both the syntax and the object references.
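
A quick way to see the difference is the sketch below; dbo.NoSuchTable is a hypothetical object that does not exist, and the procedure/view names are placeholders of my own:

-- The procedure creates successfully even though the table does not exist;
-- the missing object is only caught when the procedure is executed.
CREATE PROCEDURE dbo.usp_DeferredNameDemo
AS
Select * from dbo.NoSuchTable
GO
-- The view fails at creation time with "Invalid object name 'dbo.NoSuchTable'".
CREATE VIEW dbo.vw_DeferredNameDemo
AS
Select * from dbo.NoSuchTable
GO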

Need for TempDB Log File

While working on an issue, it struck me to ask why we need a log file for TempDB. Generally, we need the log file during crash recovery to roll back or roll forward transactions depending on whether they were committed at that time. This keeps the database transactionally consistent. Of course, we also need the log file so we can take log backups, which gives us point-in-time restores. PS: the log file keeps track of many other events that happen within a user database.

But neither of these points applies to TempDB, as it gets recreated every time the instance restarts. I did some research and found out that we need the TempDB log file for rollbacks. Apparently, I was not thinking clearly. (It always makes sense once you know the answer.) I also learnt that logging in TempDB is different from user databases: logging is lighter and faster in TempDB.

So, if we have to create a temp table, it is a lot better to create it in TempDB than in a user DB, as it is much faster because of the reduced logging.

Script to Compress data across all tables in a database


/*Create a Temp Table to hold the Compression information.*/
Create Table #TestCompression
(ObjectName varchar(128),
SchemaName varchar(128),
indexid int,
partitionNumber int,
size_with_current_compression_setting int,
size_with_requested_compression_setting int,
sample_size_with_current_compression int,
sample_size_with_requested_compression int)

/*The SELECT below generates one INSERT ... EXECUTE sp_estimate_data_compression_savings statement per user table. Run the generated statements to load the estimates into the temp table, so we can query the savings for individual tables. We are using PAGE compression here.*/

USE [<<Database Name>>]
GO
Select 'INSERT INTO #TESTCompression
Execute sp_estimate_data_compression_savings ''' +
B.Name+''','''+A.Name+''',NULL,NULL,''PAGE'''
from sys.objects A INNER JOIN
sys.schemas B on A.Schema_id=B.Schema_id
where type = 'U' and A.name not like 'dtproperties'
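
If you only want the estimate for one table, you can call the procedure directly; dbo.MyTable is a placeholder name used for illustration:

-- Estimate PAGE compression savings for a single (hypothetical) table.
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo',
     @object_name = 'MyTable',
     @index_id = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE'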

/*Sum the [Current Compression Size] and [Requested Compression Size] to figure out how much compression we are going to get. We can also query individual tables to see which ones yield better compression. In this example, I am finding only the total savings.*/
Select
sum(size_with_current_compression_setting) as [AsofNOWSizeInKB],
sum(size_with_requested_compression_setting) as [RequestedSizeinKB]
from #TestCompression

/*Finally, apply the compression. The script below generates the ALTER TABLE statements for PAGE compression; run the generated statements to rebuild the tables. Replace PAGE with ROW for row compression.*/
USE [<<Database Name>>]
GO
Select
'ALTER TABLE ['+B.Name+'].['+ A.Name
+ '] REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)'
from sys.objects A INNER JOIN
sys.schemas B on A.Schema_id=B.Schema_id
where type = 'U' and A.name not like 'dtproperties'

Clearing Plan Cache


SYNTAX :  DBCC FREEPROCCACHE

We can also clear the plan cache for a specific database or query.

To remove all the plans related to a database:
SYNTAX : DBCC FLUSHPROCINDB(<databaseid>)
EXAMPLE : DBCC FLUSHPROCINDB(7)
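
To find the database id to pass in, DB_ID can be used (MyDatabase is a placeholder name):

SELECT DB_ID('MyDatabase') as DatabaseID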

To remove the plan for a specific query, pass the plan_handle as a parameter.

SYNTAX : DBCC FREEPROCCACHE(<plan_handle>)
EXAMPLE : DBCC FREEPROCCACHE(0x05000700210F020740A1DBAA000000000000000000000000)
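
To find the plan_handle of a particular query, a minimal sketch is below; the LIKE filter ('%MySearchText%') is a placeholder for the text of the query you are after:

-- Locate cached plans whose SQL text matches the search string, then free the one you want.
SELECT TOP (10) cp.plan_handle, cp.usecounts, st.text
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE st.text LIKE '%MySearchText%'
-- Then: DBCC FREEPROCCACHE(<plan_handle from the result above>)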