Friday, April 23, 2010

Implicit Data Type Conversion

Hello all,

Yesterday I ran into a slow-running query by accident. In this case the query was generated by NHibernate. After looking at the query and the execution plan, I figured out that the problem was an implicit data type conversion.

The data in the table is stored in a column of the varchar data type. The column is the primary key with a clustered index. However, the query is issued against the table with a parameter that is declared as NVARCHAR. Do you see where this is going? Because nvarchar has higher data type precedence than varchar, when the query is executed with the nvarchar parameter SQL Server scans the clustered index, converts each row's column value to nvarchar, and then compares the result against the parameter passed in. As we all know, a clustered index scan is equal to a table scan.
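Here is a minimal sketch of the situation, with hypothetical table and column names (the real query came from NHibernate):

-- Hypothetical table: varchar primary key with a clustered index.
CREATE TABLE dbo.Customers
(
    CustomerCode varchar(20) NOT NULL PRIMARY KEY CLUSTERED
);

-- nvarchar parameter: SQL Server converts the column side to nvarchar,
-- so the plan shows CONVERT_IMPLICIT and a Clustered Index Scan.
DECLARE @Code nvarchar(20)
SET @Code = N'ABC123'
SELECT CustomerCode FROM dbo.Customers WHERE CustomerCode = @Code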

I changed the query to declare the parameter as varchar and ran it. It worked as expected. There was no need to do a data type conversion, so SQL Server used a Clustered Index Seek operator and completed the operation 4 times faster than the slow query. In my case the table I am talking about has 199K rows. Imagine how much slower it could be if it had millions of rows.
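The fixed version of the sketch above, with the parameter matching the column's data type:

-- varchar parameter: no conversion needed, Clustered Index Seek.
DECLARE @Code varchar(20)
SET @Code = 'ABC123'
SELECT CustomerCode FROM dbo.Customers WHERE CustomerCode = @Code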

I talked to the principal architect, and the solution was to update the NHibernate mappings. He said that NHibernate maps strings to the nvarchar data type by default, so if you're not using nvarchar for Unicode characters in your table, you definitely want to map NHibernate to your tables properly using the correct data type.
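For reference, here is a minimal sketch of what that mapping fix can look like in an hbm.xml file. The class, property, and column names are hypothetical; AnsiString is the NHibernate type that maps to varchar instead of the default nvarchar:

<!-- Hypothetical mapping; type="AnsiString" makes NHibernate send the
     parameter as varchar rather than the default nvarchar. -->
<class name="Customer" table="Customers">
  <id name="CustomerCode" column="CustomerCode" type="AnsiString">
    <generator class="assigned" />
  </id>
</class>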

HTH,

MileHighDBA

Wednesday, April 21, 2010

SQL Server 2008 R2 Released To Manufacturing

Here is the announcement from the Microsoft Data Platform blog that SQL Server 2008 R2 has been released to manufacturing.

Thursday, April 15, 2010

SQL Server 2008 R2

Here is the link to a free ebook download from Microsoft. Start learning before the product releases in May 2010.

SQL Server Trimming Milliseconds (Zero Out Milliseconds)

Hi,
Where I work we needed to zero out the milliseconds from a datetime field in one of our databases. I found out that somebody had created the function below, which returns the passed-in datetime with the milliseconds zeroed out. Instead of 2010-04-15 14:50:49.953 we needed to return 2010-04-15 14:50:49.000. When I looked at the function I thought there was just too much DATEPART, CONVERT, and concatenation being used, and I asked myself if I could make it shorter and cleaner. So I came up with a result, and it works. Here is the original function:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER OFF
GO
CREATE FUNCTION [dbo].[TrimMilliseconds] (@D datetime)
RETURNS datetime
AS
BEGIN
DECLARE @Date datetime
SET @Date = CONVERT(datetime,
    CONVERT(varchar(4), DATEPART(yyyy, @D)) + '-' +
    CONVERT(varchar(2), DATEPART(mm, @D)) + '-' +
    CONVERT(varchar(2), DATEPART(dd, @D)) + ' ' +
    CONVERT(varchar(2), DATEPART(hh, @D)) + ':' +
    CONVERT(varchar(2), DATEPART(mi, @D)) + ':' +
    CONVERT(varchar(2), DATEPART(ss, @D)))

return @Date
END
GO

Here is my updated version:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER OFF
GO
CREATE FUNCTION [dbo].[TrimMillisecondsNew] (@D datetime)
RETURNS datetime
AS
BEGIN
DECLARE @Date datetime
-- Style 20 is yyyy-mm-dd hh:mi:ss, so varchar(19) drops the milliseconds.
SET @Date = CAST(CONVERT(varchar(19), @D, 20) AS datetime)

return @Date
END
GO

As you can see, the second function is much cleaner and simpler. I only used the CONVERT and CAST functions once each to do the work. Less typing, less T-SQL, work is done.
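A quick way to check it, using the example values from above:

SELECT dbo.TrimMillisecondsNew('2010-04-15 14:50:49.953')
-- returns 2010-04-15 14:50:49.000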

Tuesday, April 6, 2010

24 Hours of PASS 2010

Check the link to obtain information and register for 24 Hours of PASS 2010. This is 24 hours of back-to-back SQL Server related webcasts starting on May 19 at 12:00 GMT (UTC). There are a lot of good presentations available, so go ahead, check it out, and register now. I don't think I can watch them all, but I hope I will have time for most.

Friday, April 2, 2010

MSSQL Replication 'Field Size Too Large'

Back from vacation, and I am ready to blog at least once a week. Last week there was an update to a large table involved in transactional replication. We wanted to move away from using a GUID as the primary key and implement an integer-based identity key. So I had to stop publishing the table to be able to drop the primary key, add a new column, populate it, and create the primary key on the new column.
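A rough sketch of the kind of script involved, with hypothetical publication, article, table, and column names (our actual script had more to it):

-- Stop publishing the table so its schema can change.
-- (Subscriptions to the article may need to be dropped first
--  with sp_dropsubscription.)
EXEC sp_droparticle
    @publication = N'MyPublication',
    @article = N'BigTable',
    @force_invalidate_snapshot = 1;

-- Replace the GUID primary key with an integer identity key.
ALTER TABLE dbo.BigTable DROP CONSTRAINT PK_BigTable;
ALTER TABLE dbo.BigTable ADD Id int IDENTITY(1,1) NOT NULL; -- populates existing rows
ALTER TABLE dbo.BigTable ADD CONSTRAINT PK_BigTable PRIMARY KEY CLUSTERED (Id);

-- Then re-add the article to the publication with sp_addarticle.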

So we had the script executed to do the job. Then I used the same script, which alters the table, to make the necessary changes in the subscriber database, because the table in the subscriber was in a different filegroup with different indexes and I did not want replication to drop and re-create the table itself (which is the default). I then reinitialized the replication with a new snapshot and waited for 5 minutes. Then I saw the error message and found out that it was complaining about the field size being too large. How can that be? Both tables are exactly the same for the basic schema. I tried to create a new snapshot, only to get the same ugly error message. Then I decided to script out my additional indexes in the subscriber and dropped the table. I created the table in the subscriber from a script generated in the publisher, with my additional indexes in the specific filegroup I wanted. Then I set the publication to create a new snapshot while leaving the target table in the subscriber as it is. And that did it.
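If you hit the same thing, the article property that controls what the snapshot does to the subscriber table is pre_creation_cmd; setting it to 'none' tells the snapshot to leave the existing table alone. A minimal sketch, with hypothetical publication and article names:

-- Hypothetical names; leaves the existing table at the subscriber untouched
-- when the snapshot is applied ("Keep existing object unchanged" in SSMS).
EXEC sp_changearticle
    @publication = N'MyPublication',
    @article = N'BigTable',
    @property = N'pre_creation_cmd',
    @value = N'none',
    @force_invalidate_snapshot = 1;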

Hope this helps.