Vikram Kamath

October 9, 2011

T-SQL Bites

Filed under: SQL Server — Vikram @ 12:04

Check if a table with the specified name exists or not.

select obj.*
from sys.objects as obj
join sys.schemas as sch on sch.schema_id = obj.schema_id
where OBJECTPROPERTY(obj.object_id, 'IsTable') = 1 -- obj.type = 'U'
and sch.name = '<schema_name>'
and obj.name = '<table_name>'

You can use the OBJECTPROPERTY function to resolve other types of objects in the database.
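A minimal sketch of the same pattern for other object types (the property names are standard OBJECTPROPERTY arguments; the placeholder names are mine):

```sql
-- Check whether a view with a given name exists in a given schema.
select obj.*
from sys.objects as obj
join sys.schemas as sch on sch.schema_id = obj.schema_id
where OBJECTPROPERTY(obj.object_id, 'IsView') = 1
and sch.name = '<schema_name>'
and obj.name = '<view_name>'
-- Other handy properties: 'IsProcedure', 'IsUserTable',
-- 'IsScalarFunction', 'IsTableFunction', 'IsTrigger'
```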

Check if a foreign key between two tables with a specified name exists or not.

select fk.* from sys.foreign_keys fk
join sys.schemas as sch on sch.schema_id = fk.schema_id
join sys.objects as parentobj on parentobj.object_id = fk.parent_object_id and parentobj.schema_id = sch.schema_id
join sys.objects as refobj on refobj.object_id = fk.referenced_object_id and refobj.schema_id = sch.schema_id
where sch.name = '<schema_name>'
and fk.name = '<foreign_key_name>'
and parentobj.name = '<foreign_key_table_name>'
and refobj.name = '<referenced_table_name>'

Check if an index with a specified name exists on a table or not.

select *
from sys.indexes as idx
join sys.objects as obj on obj.object_id = idx.object_id
join sys.schemas as sch on sch.schema_id = obj.schema_id
where idx.name = '<index_name>'
and obj.name = '<table_name>'
and sch.name = '<schema_name>'
-- and idx.is_primary_key = 1
-- and idx.is_unique_constraint = 1



August 9, 2011

SQL Server Version and Build Number

Filed under: SQL Server — Vikram @ 15:59

I try to link the various SQL Server versions (RTM, Service Packs, Cumulative Updates) with their respective build numbers. Here are a few links I use that might come in handy for many of you database developers out there. Some of these links are official and some aren’t.

  1. How to determine the version and edition of SQL server and its components
  2. SQL Server Version
  3. SQL Server Version Database

Some of the above links are comprehensive enough for most developers.

Some useful SQL Server functions to determine the version and edition:
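The code snippet that originally followed did not survive the migration; the standard built-ins it would have shown are these:

```sql
-- Full version string, including build number and edition:
select @@VERSION;

-- Individual properties:
select SERVERPROPERTY('ProductVersion') as BuildNumber,   -- e.g. 9.00.4035.00
       SERVERPROPERTY('ProductLevel')   as ProductLevel,  -- RTM, SP1, SP2, ...
       SERVERPROPERTY('Edition')        as Edition;       -- e.g. Enterprise Edition
```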


SQL Server identities: identity seed, increment and current value

Filed under: SQL Server — Vikram @ 12:39

I have been trying to get the current seed value for an identity column using SQL Server object metadata. The only luck I have had so far is to get the seed value and the increment value that were used when the table with the identity was created. I thought the queries would come in handy at some point in the future, so here they are.


Another variation of the above query also retrieves the name of the column with the identity specified.

select sch.name as SCHEMA_NAME,
	obj.name as TABLE_NAME,
	col.name as IDENTITY_COLUMN_NAME,
	col.seed_value as IDENT_SEED_VALUE,
	col.increment_value as IDENT_INCR_VALUE,
	col.last_value as IDENT_CURRENT_VALUE
from sys.schemas as sch
join sys.objects as obj on obj.schema_id = sch.schema_id
join sys.identity_columns as col on col.object_id = obj.object_id
where obj.type = 'U' and obj.is_ms_shipped = 0
order by sch.name, obj.name

The SQL function IDENT_CURRENT() returns the current value or the last identity value generated for the table.

However, note that the SQL functions IDENT_SEED() and IDENT_INCR() only return the values that were used during the creation of the table. If you have reseeded the table identity since its creation, then the functions would still show the original values.
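A quick demonstration of that behaviour, with a hypothetical throwaway table:

```sql
-- Hypothetical demo table, created only to show the behaviour.
create table dbo.DemoIdent (Id int identity(1, 1), Val int);
insert into dbo.DemoIdent (Val) values (10);   -- Id = 1

-- Reseed so that the next inserted row gets Id = 100.
dbcc checkident ('dbo.DemoIdent', reseed, 99);

select IDENT_SEED('dbo.DemoIdent');    -- still 1: the creation-time seed
select IDENT_CURRENT('dbo.DemoIdent'); -- 99: reflects the reseed
```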

I would like to know if anyone has found a way to get the last seed value specified for any given table when it was reseeded after the table was created.

April 9, 2010

DateTime Manipulation: Continued

Filed under: SQL Server — Vikram @ 16:55

In my previous post DateTime Manipulation: Date Part or Time Part I mentioned how we can get the date part and the time part from a given DateTime. A couple of days back I came across an article where another solution is provided to get the date and time parts out of a given datetime using the DATEADD and DATEDIFF functions.

Here is the link to the article Date Manipulation with DATEADD/DATEDIFF

Note: You may need to be logged in to view this article.

October 27, 2009

DateTime Manipulation: Date Part or Time Part

Filed under: SQL Server — Vikram @ 10:53

There are times, when using SQL Server 2000 or 2005, when you really want to use either the date part or the time part of a datetime value, but still want to use it as a datetime value for comparison. Usually, as a solution, you would try to build a new datetime by converting the current datetime into a string, or by extracting the day, month, year, etc. from the datetime using the respective SQL functions.

The SQL Server’s default language settings may sometimes pose problems when building the new datetime from the various parts of the datetime.

However, I found the following solution from here. It is a clean solution to convert the datetime to midnight or keep the time-part but change the date-part to 1900-01-01.

Here is the code:
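The original snippet did not survive the move to this page; the DATEADD/DATEDIFF idiom described above looks like this:

```sql
-- Date part only: count whole days since day 0 (1900-01-01),
-- then add them back to day 0, which lands on midnight of the same day.
select DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0);

-- Time part only: subtract the whole-day count from the value itself,
-- which moves the date part back to 1900-01-01 and keeps the time.
select DATEADD(day, -DATEDIFF(day, 0, GETDATE()), GETDATE());
```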

March 21, 2007

Partitioned views: CHECK constraints with NOT FOR REPLICATION clause

Filed under: SQL Server — Vikram @ 17:08

I am posting this entry to remind myself about what happens when you have the NOT FOR REPLICATION clause with a check constraint on a column in a table.

I was going through how data partitioning can be achieved in SQL Server 2000/2005. As mentioned in one of the various articles on the internet, the simple way is to create identical tables with the same primary key columns and have a view on top that includes each of the tables with a UNION clause. However, that is not the scope of this post.

The scope of this post is the impact on queries against partitioned views when the check constraints on the underlying partitioned tables are created with a NOT FOR REPLICATION clause. I have a standalone database without any replication involved. For example, I would do the following:

Create two identical tables:

create table [dbo].[Table_1] (
 [TableID] int NOT NULL PRIMARY KEY NONCLUSTERED
                CHECK NOT FOR REPLICATION ([TableID] BETWEEN 1 AND 999),
 [AnyOtherColumn] int NOT NULL
) ON [Primary]

create table [dbo].[Table_2] (
 [TableID] int NOT NULL PRIMARY KEY NONCLUSTERED
                CHECK NOT FOR REPLICATION ([TableID] BETWEEN 1000 AND 1999),
 [AnyOtherColumn] int NOT NULL
) ON [Primary]

Note the NOT FOR REPLICATION clause in the check constraints on the TableID column.

Creating a view on top of that

create view TableView
as
 select * from Table_1
 union all
 select * from Table_2

Now use the following query to see the query execution plan:

select * from TableView where TableID = 10

In SQL server 2000 the query execution would look like this:

In SQL server 2005 the execution plan would look like this:

In both cases you would notice that there is an index scan on both tables that are part of the view. What would happen if there was no primary key? It would have done a table scan on all the tables. In this example there are only two tables; what would the performance hit be if there were many more tables with huge amounts of data in them? The last thing you want is a table scan on all tables to retrieve just one row.

Now drop the tables and the view and create them again, this time without the NOT FOR REPLICATION clause in the check constraint.

create table [dbo].[Table_1] (
 [TableID] int NOT NULL PRIMARY KEY NONCLUSTERED
                CHECK ([TableID] BETWEEN 1 AND 999),
 [AnyOtherColumn] int NOT NULL
) ON [Primary]

create table [dbo].[Table_2] (
 [TableID] int NOT NULL PRIMARY KEY NONCLUSTERED
                CHECK ([TableID] BETWEEN 1000 AND 1999),
 [AnyOtherColumn] int NOT NULL
) ON [Primary]

create view TableView
as
 select * from Table_1
 union all
 select * from Table_2

Now see the execution plan for the same query

select * from TableView where TableID = 10

This is what you see in SQL server 2000:

This is the execution plan on SQL server 2005

The difference this time is that it has done an index seek only on the partition where the data would be present. Since the check constraint on Table_1 allows values for TableID column only in the range of 1 and 999, it picks up data only from that partition table and does not bother looking for it in the other tables in the view.

So, why does the NOT FOR REPLICATION clause on the check constraint reduce the performance of the query so drastically?

Do not use the NOT FOR REPLICATION clause unless you need it.

Now, to add to the fun, create the tables again, this time with the primary key as clustered and without the NOT FOR REPLICATION clause, and look at the execution plan for the same query once more.

On SQL server 2000

On SQL server 2005

On both SQL Server versions there is a clustered index scan on just the partition that is in the scope of the query.

If any one of you figures out why the NOT FOR REPLICATION clause on the check constraint causes such severe query degradation, I would like to know why.

I have opened a thread on the MSDN forum for the same problem. You can find it here.

March 8, 2007

Cumulative hotfix for SQL Server 2005 SP2 is now available

Filed under: SQL Server — Vikram @ 23:15

Cumulative hotfix package (build 3152) for SQL Server 2005 Service Pack 2 is available.

For more information about the bug fix list, follow the link.

Not long after SP2 was released.

February 19, 2007

Microsoft releases SQL Server 2005 Service Pack 2

Filed under: SQL Server — Vikram @ 17:16

It is here and ready for download. Microsoft has officially released SQL Server™ 2005 Service Pack 2. It can be downloaded from here.

Books Online available here


SQL Server 2005 SP2 will be available soon

Filed under: SQL Server — Vikram @ 14:35

Microsoft seems to be almost there with SQL Server™ 2005 Service Pack 2. Although they have not set a date yet, the RTM should be available soon, real soon.

For more information visit the following links:

To see what is new in Service Pack 2, click here.


I came across this link that claims it’s going to be released today (19th February 2007). So, I believe it should be out anytime now.

February 10, 2007

SQL Server 2005 SP2…… When???

Filed under: SQL Server — Vikram @ 20:45

It has been almost a year since SQL Server 2005 Service Pack 1 (SP1) was released. And now many of us are waiting for the release of Service Pack 2 (SP2).

We found an issue with SQL Server 2005 SP1 last week. It started with the following error in the application, which had been running seamlessly on SQL Server 2005 SP1 for every operation except this one.

Fatal error 3624 occurred at Feb 5 2007  2:34PM. Note the error and time, and contact your system administrator.

A severe error occurred on the current command.  The results, if any, should be discarded.

Location:  IndexRowScanner.cpp:370
Expression:  m_sizeOfBuffer > currentOffset + colLen
SPID:   54
Process ID:  2712

In the beginning we thought it might be something in our application. But when I checked the process ID in the task manager, it turned out to be the SQL Server 2005 process. The SPID was the application server’s connection to the database.

Immediately we turned to the wonderful SQL Profiler, and this is what we saw:

SQL Server Assertion: File: <IndexRowScanner.cpp>, line=370 Failed Assertion = ‘m_sizeOfBuffer > currentOffset + colLen’. This error may be timing-related. If the error persists after rerunning the statement, use DBCC CHECKDB to check
the database for structural integrity, or restart the server to ensure in-memory data structures are not corrupted.

Warning: Fatal error 3624 occurred at Feb 5 2007  3:00PM. Note the error and time, and contact your system administrator.

2007-02-05 15:00:23.04 spid54      Error: 3624, Severity: 20, State: 1.
2007-02-05 15:00:23.04 spid54      A system assertion check has failed. Check the SQL Server error log for details

We investigated further to find the point of failure. We found that certain SQL statements being executed against the database were causing SQL Server to throw the assertion error. The common factors between these statements were the table and the fields being used. More specifically, it had to do with the bit fields in the table: some of the statements were updating the bit fields, and in some the bit fields were part of the WHERE clause. Even a simple statement like SELECT MY_BIT_FIELD FROM MY_TABLE failed with the same assertion error.

We then went to the MSDN forums and learnt that there were issues with bit fields that had been fixed as part of SP2. So, I installed the SQL Server 2005 SP2 Community Technology Preview (CTP), and to our luck the problem was resolved.

Now the question was: when will the SQL Server 2005 SP2 RTM be available? I learnt from Jim that it is expected in the first quarter of this year, which means it should be available sometime in the next three months. I hope Microsoft will release it as scheduled. For more information, Jim pointed me to the SQL Server Manageability Team Blog.

Hope this helps you.

November 11, 2005

Backup Log Failing: SQL Server 2000

Filed under: SQL Server — Vikram @ 17:04

We are in the process of upgrading a database schema from one version to a later version. To do that, we had to take a backup of the database, which was under Transactional Replication, and restore it on a different system to do the upgrade. We did the backup and the restore without any problems. The next step was to modify the schema and data in the database to bring it up to the new version. When we started executing our upgrade scripts, we realized that the transaction log was growing rapidly and eating up a large amount of disk space. At this point anyone would say that we should truncate the log and be alright. I did the same: I issued the following command to truncate the log and make some space.
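The command itself did not survive the move to this page; on SQL Server 2000 the log-truncation command would have been the following (the database name is a placeholder):

```sql
-- TRUNCATE_ONLY discards the inactive part of the log without backing it up
-- (supported on SQL Server 2000/2005; removed in later versions).
BACKUP LOG [MyDatabase] WITH TRUNCATE_ONLY
```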


When I executed the command SQL Server came up with the message

The log was not truncated because records at the beginning of the log are pending replication. Ensure the Log Reader Agent is running or use sp_repldone to mark transactions as distributed.

This started a wave of questions in my mind. Why was the database still behaving as if it were under replication? Is there any way to truncate the log file? Does BOL (Books Online) have any clues? After much effort to find a solution, I ended up posting the question on the Microsoft SQL Server Replication newsgroup.

As a reply to my query, I was asked to issue the following statement before truncating the log:

EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1

But, when I executed this statement SQL Server came up with another error:

Server: Msg 18757, Level 16, State 1, Procedure sp_repldone, Line 1
The database is not published.

I was finally asked to try one more thing, and I was surprised to see that this solution did work. So here it is:

  1. Publish the database in Transactional Replication.
  2. Issue the following command:
    EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1
  3. Truncate the log: BACKUP LOG <database_name> WITH TRUNCATE_ONLY
  4. Drop the publication.

This was so simple, but I could never have imagined doing that. Thanks to Paul Ibison. For more answers on replication, visit his site.

Thanks Paul.


October 7, 2005

Microsoft© SQL Server 2000 Replication Bug

Filed under: SQL Server — Vikram @ 13:30

About three weeks back I came across a problem with Microsoft© SQL Server 2000 replication, specifically in Transactional replication. The details of SQL Server 2000 replication are out of the scope of this post and can be found on the Microsoft website.

Since this problem turned out to be a bug in SQL Server 2000 (SQL2K), I thought it would be worthwhile to blog about it so that you readers may benefit from it. Before starting off with the details, I assume that you have knowledge of SQL2K Transactional replication.

About three weeks back I was given the task of setting up replication for one of our clients, since the existing setup had broken down due to a server crash; it was the subscriber server that had crashed. We tried to rebuild the replication as normal, but it seemed to be failing during the snapshot initialisation of data at the subscriber. This was because the system kept running out of disk space when the snapshot files for a particular Publication were being copied to the disk, as the publisher database was far too large for the snapshot (initialisation) process. As a result, we had to come up with another solution. Suggestions were made to add disks, or to create separate Publications for the huge tables, and so on. But one of my colleagues (who does not have a web presence yet) suggested the option of using a backup of the database, restoring it onto the subscriber, and setting up the replication by allowing the Subscription NOT to initialize the data on the subscriber. Out of the three, it was decided to use the third option, the backup. I was given the task of creating scripts for the same.

So, I started with a test dataset to make life easier. Before jumping into the task, I thought I would do a Google search, and to my luck I found this article from Microsoft©: “HOW TO: Manually Synchronize Replication Subscriptions by Using Backup or Restore”. I went through it and realized this was exactly what I wanted to do. Besides, I did not know about the scripts that needed to be generated manually on the Publisher database and applied to the Subscriber database, so the article was very helpful; I may have run into other problems had I missed it. For example, I did not know that we had to use the backup of a “Published” database and not just any backup, or about the creation of the scripts, etc.

So, I started off by following exactly what the article said.

I Published my database first. Then I took a full backup of the published database and restored it on the subscriber server. Then I created my Subscriptions, where I selected not to deliver the data and schema. I then changed the Distribution Agent’s schedule to run once. I used the option of Queued Updating Subscriber.

Once this was done, I ran sp_scriptpublicationcustomprocs for each of my publications on the Publisher database and executed the generated scripts on the Subscriber database. Since I had a Queued Updating subscriber, I also ran sp_script_synctran_commands for each of my publications on the Publisher database and executed the generated code on the Subscriber database successfully. I did run the Distribution Agents once (before applying the scripts) to create the table MSsubscription_agents. As the last step, I changed the Distribution Agent properties to run continuously by adding -Continuous to the command line.

It went very smoothly, and transactional replication was set up on our test machines using the backup of a Published database. As a test, I changed the data in a particular column of a table on the publisher; the change got replicated as expected. Since we had used a Queued Updating subscriber, I also had to test a data change on the Subscriber database to see whether it got queued by the Queue Reader and updated on the Publisher database. So, I changed the data in a particular table on the Subscriber database. This time the result was not as expected: my Queue Reader failed with the following error:

Server: PublisherServerName, Database MyDatabaseName: ODBC Error: Could not find stored procedure 'upd.sp_MSsync_upd_MyArticleName_1'.

As the error suggests, the Queue Reader was trying to execute the stored procedure sp_MSsync_upd_MyArticleName_1 on the Publisher database (where it did exist) under a wrong user, “upd”. I opened every possible stored procedure and trigger that took part in that update statement, but failed to understand why the Queue Reader was trying to execute the stored procedure in the context of the non-existent user “upd”.

I then set up replication as normal, i.e. without taking the backup of the published database. Whilst pushing the subscriptions, I selected the option to initialise the data and schema on the subscriber. After setting up the replication, I did the same data modification tests and everything went fine, without any errors from the Queue Reader.

This particular problem was happening only when subscriptions were created with the option not to initialize data at the subscriber.

After much trial and analysis, we decided to take this problem up with Microsoft©. After two weeks of continuous interaction with the Microsoft support team and their extensive tests and debugging, they realised that the Queue Reader version was being detected as, in their words, a pre-SP3 version, which was why the stored procedure was being executed in the context of the user “upd”. This version information is calculated from the data in the queue_id column of the table MSsubscription_agents: the pre-SP3 value for the column was “mssqlqueue”, and Microsoft had changed it to “mssqlqueuev2” in Service Pack 4 for SQL Server 2000. So, they finally accepted it as their bug and gave a workaround until they fix it in a subsequent Service Pack, if any.

The work around is as follows:

Once you have set up the replication using the backup and restore method described in the article, you need to do the following:

1. Stop the Queue Reader and the Distribution Agents

2. Execute the following piece of SQL on SQL Query Analyzer© on the Subscriber database:

sp_configure 'allow updates', 1
reconfigure with override
update dbo.MSsubscription_agents set queue_id = 'mssqlqueuev2' where queue_id = 'mssqlqueue'
sp_configure 'allow updates', 0
reconfigure with override

3. Start the Queue Reader and the Distribution Agents.

Simple, isn’t it? But it took the Microsoft© support team two weeks to come up with it.

In the end it was indeed a great exercise for me to go a little more in depth into SQL Server 2000 Transactional replication.

More later!!!!
