
Vacation Week (aka Daddy Week)

A few weeks ago I took a much needed (and long overdue) break from work. It really wasn’t a vacation but a staycation. My wife wasn’t able to take off work, so most of the week was spent with just Cameron and me. Cameron, my four-year-old son, referred to it as Daddy Week.

Monday was the 4th of July so it was a holiday and everyone was already off. My brother-in-law grilled some hamburgers and hot dogs for lunch and then we lounged around in the pool for the rest of the afternoon. Later we went to my wife’s parents’ house (they live just outside the city) to shoot off some fireworks. The fireworks were almost an epic fail, as we nearly set a field on fire when one of the fireworks fell over, went crazy, and took on a life of its own. Luckily we didn’t do too much damage and got the fire put out, but that pretty much put an end to our fireworks display.

On Tuesdays, my mom takes Cameron to story time at the library, so this week Cameron got to take me to story time. This week’s topic was Europe, so they read stories, sang songs, and did a Europe-themed craft project. Cameron really doesn’t participate in the singing, but he loves the crafts. The craft project for this week was making a pizza out of construction paper.

Wednesday featured us having lunch with my grandparents, mom, sister, and great aunt. Then Cameron wanted to go to my grandparents’ house (as he usually does on Wednesday afternoons) to play. So that’s what we did for a little while. After we got home, we climbed up in bed so I could read Cameron some of his library books. It was no time before Cameron was fast asleep, and I wasn’t too far behind him for an afternoon nap.

Thursday and Friday were extremely lazy. We mostly stayed home, watched movies and played with whatever Cameron wanted to play with. Cameron even got to watch Star Wars for the first time. (I think he really liked it.)

We topped off the incredible week by going to Local Culture to get some frozen yogurt. I had promised Cameron earlier in the week that we would go get a treat on Friday, so we met Lisa up there when she got off work.

I wasn’t ready for the vacation to be over, but at the same time I was ready to get back to work. The week off spent with Cameron was an excellent way to recharge my batteries and get me refocused back on work. I’m already looking forward to our next vacation in October when we are taking Cameron to Disney World.

Sharing Temp Tables Between Stored Procedures

A few months back I encountered a problem trying to share a temporary table between two stored procedures.  One way to do this is to create a temp table with the same name in both procedures.  I am not crazy about this approach, because it means duplicating the code that creates the temp table.  After brainstorming with another developer, we came up with the idea to create the temp table with a “dummy” column and then use an alter table to add the columns we really need.

CREATE PROCEDURE myProcA
(
    @ReturnData BIT = 1
)
AS
BEGIN
    -- Create the temp table with only a "dummy" column if the caller has not already created it
    IF (OBJECT_ID('tempdb..#MyTable') IS NULL)
    BEGIN
        CREATE TABLE #MyTable
        (
            Temp NVARCHAR(1)
        )
    END
    
    -- Add the columns we actually need to the shared temp table
    ALTER TABLE #MyTable
        ADD ObjectID INT,
            ObjectName SYSNAME,
            ObjectType NVARCHAR(60),
            CreateDate DATETIME,
            ModifyDate DATETIME
    
    INSERT #MyTable (ObjectID, ObjectName, ObjectType, CreateDate, ModifyDate)
    SELECT [object_id], [name], type_desc, create_date, modify_date
    FROM sys.objects
    
    IF @ReturnData = 0
        RETURN
        
    SELECT * FROM #MyTable
END
GO

CREATE PROCEDURE myProcB
AS
BEGIN
    -- Create the temp table in the caller so it is visible inside myProcA
    CREATE TABLE #MyTable
    (
        Foo NVARCHAR(1)
    )
    
    EXECUTE myProcA 
        @ReturnData = 0
        
    SELECT ObjectType, COUNT(ObjectID) ObjectCount
    FROM #MyTable
    GROUP BY ObjectType
    ORDER BY ObjectType
END
GO

The procedures I was working with were a lot more complicated, but this covers the basic idea.

At first glance it seems to work, as the procedure myProcA by itself works as expected, but when I tried to execute myProcB, I was in for a surprise.  I got a host of invalid column name errors, one for every column added by the alter table.

Initially, I was baffled, but I have since found two solutions to this problem as well as an explanation of why I got the invalid column name errors.

Solution 1

The first solution I found was to put the alter table in another stored procedure and call that procedure after creating my initial temp table.  The updated code looks like this:

CREATE PROCEDURE myProcC
AS
BEGIN
    ALTER TABLE #MyTable
        ADD ObjectID INT,
            ObjectName SYSNAME,
            ObjectType NVARCHAR(60),
            CreateDate DATETIME,
            ModifyDate DATETIME
END
GO

ALTER PROCEDURE myProcA
(
    @ReturnData BIT = 1
)
AS
BEGIN
    IF (OBJECT_ID('tempdb..#MyTable') IS NULL)
    BEGIN
        CREATE TABLE #MyTable
        (
            Temp NVARCHAR(1)
        )
        
        EXECUTE myProcC
    END
    
    INSERT #MyTable (ObjectID, ObjectName, ObjectType, CreateDate, ModifyDate)
    SELECT [object_id], [name], type_desc, create_date, modify_date
    FROM sys.objects
    
    IF @ReturnData = 0
        RETURN
        
    SELECT * FROM #MyTable
END
GO

ALTER PROCEDURE myProcB
AS
BEGIN
    CREATE TABLE #MyTable
    (
        Foo NVARCHAR(1)
    )
    
    EXECUTE myProcC
    
    EXECUTE myProcA 
        @ReturnData = 0
        
    SELECT ObjectType, COUNT(ObjectID) ObjectCount
    FROM #MyTable
    GROUP BY ObjectType
    ORDER BY ObjectType
END
GO

This approach works beautifully: both procedures execute successfully and return the results we were looking for.  It does add another stored procedure to maintain, but the meat of the temp table creation is all done in a single place.  One small drawback with this approach is that you lose IntelliSense for all the columns added to the temp table by the alter table.

Solution 2

Last month, I gave a presentation at NEADNUG about this problem and another possible solution came up.  So I decided what the heck and gave it a try during my presentation.  I cannot take credit for this solution, because I did not come up with it on my own, but I think it is a much better approach than the one I came up with originally.

The suggested solution was to take my original procedure and run the alter table as an EXECUTE character string (dynamic SQL).  The resulting change to the original procedures was:

ALTER PROCEDURE myProcA
(
    @ReturnData BIT = 1
)
AS
BEGIN
    IF (OBJECT_ID('tempdb..#MyTable') IS NULL)
    BEGIN
        CREATE TABLE #MyTable
        (
            Temp NVARCHAR(1)
        )
    END
    
    EXECUTE('ALTER TABLE #MyTable
        ADD ObjectID INT,
            ObjectName SYSNAME,
            ObjectType NVARCHAR(60),
            CreateDate DATETIME,
            ModifyDate DATETIME')
    
    INSERT #MyTable (ObjectID, ObjectName, ObjectType, CreateDate, ModifyDate)
    SELECT [object_id], [name], type_desc, create_date, modify_date
    FROM sys.objects
    
    IF @ReturnData = 0
        RETURN
        
    SELECT * FROM #MyTable
END
GO

ALTER PROCEDURE myProcB
AS
BEGIN
    CREATE TABLE #MyTable
    (
        Foo NVARCHAR(1)
    )
    
    EXECUTE myProcA 
        @ReturnData = 0
    
    SELECT ObjectType, COUNT(ObjectID) ObjectCount
    FROM #MyTable
    GROUP BY ObjectType
    ORDER BY ObjectType
END
GO

You still lose IntelliSense with this approach, but what I really like about it is that it eliminates the extra stored procedure and keeps the alter table code right next to the rest of the code that uses it within the procedure.

Why does this work?

The reason the original procedure does not work while these two solutions do comes down to batching.

If the table (temporary or otherwise) is created within the same batch as the alter table, you can reference the columns added by the alter table command.  This explains why, in the original code, executing myProcA by itself works fine.

Now if the table is created in one batch and the alter table is done in a different batch, that second batch cannot reference the columns its own alter table added.  This is why the original myProcB gives the invalid column name errors.

When you execute a stored procedure (myProcC in this case) or use an EXECUTE character string to run dynamic SQL, the stored procedure or dynamic SQL runs as its own batch.  Therefore, you can access the columns it added once it finishes running.  The dynamic SQL can be executed using the EXECUTE command or sp_executesql; both treat batches the same way.
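For example, here is a sketch (not code from the original procedures) of the same alter table from Solution 2 written with sp_executesql instead of EXECUTE; either form runs the statement in its own batch:

-- Running the alter table as dynamic SQL via sp_executesql works the same as EXECUTE('...')
EXECUTE sp_executesql N'ALTER TABLE #MyTable
    ADD ObjectID INT,
        ObjectName SYSNAME,
        ObjectType NVARCHAR(60),
        CreateDate DATETIME,
        ModifyDate DATETIME'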

Either of the above approaches will work or maybe you will come up with another solution, but either way I hope you have found this post helpful.

Where Has Time Gone

Wow! What in the world has happened to the last few months?

Between work, family and other things coming up, I have barely had time to catch my breath much less write. I feel horrible about not getting to write so I am squeezing in a little time to write a short post while eating lunch.

So what’s going on?

As part of my continual learning, I have started exploring web development for the first time using MVC 3. Along with MVC 3, I am looking at Entity Framework 4.1. It has been an interesting learning experience that I will definitely be writing about soon.

This weekend the wife and I are planning to do some significant landscaping in the front yard and with the help of my dad we are going to try to fix a drainage problem in our backyard.

The summer bowling season starts next week, which I am super excited about. I have been off from bowling the last three weeks and I am definitely ready to get back at it.

In a couple of weeks, I am giving another presentation at NEADNUG. The topic is going to be SQL Server and an interesting challenge I came across recently while working on a project. It should be an interesting discussion.

That’s all for now; it’s back to the grindstone.

Identity Columns and Reseeding

The DBCC CHECKIDENT statement can be used to reseed or reset the value of an identity column.  It is a statement that should be used carefully and selectively.  I have used the statement many times in the past, but recently I came across some strange (but documented) behavior.

To illustrate the strange behavior, we will first create a test table, stick some data in it, and examine how the identity column acts normally.  Nothing fancy here; we are going to create a table named “TestTable” and put three records in it.

CREATE TABLE TestTable
(
    Id INT IDENTITY(1, 1),
    TestValue VARCHAR(15)
)
GO

INSERT TestTable (TestValue) VALUES ('Test 1')
GO

INSERT TestTable (TestValue) VALUES ('Test 2')
GO

INSERT TestTable (TestValue) VALUES ('Test 3')
GO

SELECT * FROM TestTable
GO

The result of the select statement is:

Id          TestValue
----------- ---------------
1           Test 1
2           Test 2
3           Test 3

We will use this query several times throughout this post to examine the state of our identity column.

SELECT 
    ic.seed_value,
    ic.increment_value,
    ic.last_value,
    IDENT_CURRENT(t.name) AS current_identity
FROM 
    sys.identity_columns ic
INNER JOIN
    sys.tables t
ON
    ic.object_id = t.object_id
WHERE
    t.name = 'TestTable'

Running this query now yields the following results:

seed_value increment_value last_value current_identity
---------- --------------- ---------- ----------------
1          1               3          3

As expected the last_value is 3 and the current_identity is also 3.  If I truncate the table and re-run the query to check the state of the identity column, we will see the following results:

TRUNCATE TABLE TestTable
GO
seed_value increment_value last_value current_identity
---------- --------------- ---------- ----------------
1          1               NULL       1

The truncate table command resets the last_value to NULL and the current_identity to the seed_value.  Now if I insert a couple more records into our TestTable, what will we get?

INSERT TestTable (TestValue) VALUES ('Test 4')
GO

INSERT TestTable (TestValue) VALUES ('Test 5')
GO

SELECT * FROM TestTable
GO

The result of the select statement:

Id          TestValue
----------- ---------------
1           Test 4
2           Test 5

And the result of our query to check the state of the identity column:

seed_value increment_value last_value current_identity
---------- --------------- ---------- ----------------
1          1               2          2

There are cases where you cannot use the truncate table command, such as when the primary key of the table is referenced in a foreign key relationship.  In these cases, we can only delete from the table.  The delete command does not alter the values of the last_value and current_identity columns, so the next record inserted into the table would get the next available identity value, 3 in our case.

DELETE FROM TestTable
GO
seed_value increment_value last_value current_identity
---------- --------------- ---------- ----------------
1          1               2          2

This is where the DBCC CHECKIDENT statement comes into play.  We can use DBCC CHECKIDENT to reseed the current_identity value.  So let’s see what happens if we run this set of commands:

DELETE FROM TestTable
GO

DBCC CHECKIDENT('TestTable', RESEED, 1)
GO

INSERT TestTable (TestValue) VALUES ('Test 6')
GO

SELECT * FROM TestTable
GO

Here is the state of our identity column before inserting the record:

seed_value increment_value last_value current_identity
---------- --------------- ---------- ----------------
1          1               1          1

And the output of the select after the insert may be somewhat surprising.  The value of the Id column is 2.  When we called the CHECKIDENT statement we provided a reseed value of 1, so the expectation might have been for the next Id to be 1 as well.

Id          TestValue
----------- ---------------
2           Test 6

So it looks like, to get the record inserted after the delete to have an Id of 1, we need to reseed with a new seed value of 0.  Doing that, we get the desired results.  Here are the SQL statements, the state of our identity column before inserting the new record, and the results of the select statement.

DELETE FROM TestTable
GO

DBCC CHECKIDENT('TestTable', RESEED, 0)
GO

INSERT TestTable (TestValue) VALUES ('Test 7')
GO

SELECT * FROM TestTable
GO
seed_value increment_value last_value current_identity
---------- --------------- ---------- ----------------
1          1               0          0
Id          TestValue
----------- ---------------
1           Test 7

It seems that when we use the DBCC CHECKIDENT statement to reseed the identity value of a table it adds 1 to the new seed value we specify to get the identity of the next record.  Is this always the case?

Well, the answer is “It depends.”  Let’s see what happens if I truncate the table instead of deleting from it in our last set of commands.

TRUNCATE TABLE TestTable
GO

DBCC CHECKIDENT('TestTable', RESEED, 0)
GO

INSERT TestTable (TestValue) VALUES ('Test 7')
GO

SELECT * FROM TestTable
GO
seed_value increment_value last_value current_identity
---------- --------------- ---------- ----------------
1          1               NULL       0
Id          TestValue
----------- ---------------
0           Test 7

Now there is a surprising twist.  The value of the Id column gets set to 0.  If the value of the last_value column in sys.identity_columns is null, then the value of the current_identity is used as the identity value for the next record inserted.  If the value of last_value is not null, then the increment_value is added to the current_identity value to get the identity value for the next record.

This is the documented behavior in Books Online.  I had to read it a couple of times before I noticed where this behavior is described.  Here is the explanation from the remarks on DBCC CHECKIDENT (table_name, RESEED, new_reseed_value):

Current identity value is set to the new_reseed_value. If no rows have been inserted into the table since the table was created, or if all rows have been removed by using the TRUNCATE TABLE statement, the first row inserted after you run DBCC CHECKIDENT uses new_reseed_value as the identity. Otherwise, the next row inserted uses new_reseed_value + the current increment value.
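Putting that remark into practice, here is a rough sketch (using the TestTable example above and assuming the increment of 1 we have been working with) of how you could pick the reseed value so the next insert always gets an Id of 1, regardless of whether last_value has been reset:

DECLARE @NewSeed INT

-- last_value is NULL for a brand new or truncated table, so the next insert uses the
-- reseed value itself; otherwise the next insert uses the reseed value plus the increment (1 here).
SELECT @NewSeed = CASE WHEN last_value IS NULL THEN 1 ELSE 0 END
FROM sys.identity_columns
WHERE [object_id] = OBJECT_ID('TestTable')

DBCC CHECKIDENT('TestTable', RESEED, @NewSeed)
GO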

Hopefully this explanation will help you out.

Expression Tree Presentation (Follow Up)

Made it through the presentation and I believe it came off pretty well.  We had a smallish crowd due to some rainy and stormy weather, but those attending seemed to be pretty interested in exploring expression trees.  We took a look at what the expression tree would look like for several different expressions, including unary and ternary operations.

The general consensus seemed to be that we were not sure how we would use expression trees in our normal everyday programming, but they were still pretty cool to learn about nonetheless.

Expression Tree Presentation

I am giving a presentation this week at the local .NET user group on Expression Trees or at least I am supposed to be. This topic has been a lot more difficult to present on than I thought when I volunteered to give the presentation last month.

I understand the concepts behind expression trees and what they are, but I have not been able to come up with an example of how they can be used to improve my everyday development. I guess if I were writing a LINQ provider I would find using them more practical.

Well the presentation is Thursday night and I am sure I will be ready to present when the time comes. Wish me luck.

Jonesboro Gets 3G (Finally)

Today is a monumental day for AT&T customers in Northeast Arkansas. Finally after years of rumors and waiting, AT&T has turned on 3G in the Jonesboro area.

The Edge data network that had been in place had gotten atrociously slow, to the point where using my iPhone on Edge was almost impossible. Verizon has had 3G in Jonesboro for at least the last three years, and I was seriously considering switching carriers.

The big holdup on switching carriers for me was the iPhone. A funny coincidence is that just about a week ago the Verizon iPhone was announced. I guess AT&T turned on 3G just in time. Early impressions of 3G are great and I hope it will remain stable. So, AT&T will retain me as a customer for a while longer.

Now, if I could only get a new tower built closer to my house to improve reception. (I’ve heard Verizon’s signal in my neighborhood is not much better.)

Teaching Others

Teaching others can be challenging and requires a great deal of patience. Teaching is also something that not everyone is good at doing. The funny thing about teaching is that just because you are good at something it does not mean you will be good at teaching it to others.

I would consider myself to be a pretty good bowler averaging around 215 a game. I am by no means the best bowler in the world but I can hold my own. People ask me all the time how to throw strikes, but I cannot figure out how to explain it to them. I get up on the approach, instincts take over and I throw the ball down the lane. For me bowling is all about feel. How the ball feels in my hand is as much a factor as lane conditions when it comes to how well I’m going to bowl on any particular night.

When it comes to development, I feel much more confident explaining things as long as I understand them myself. It doesn’t come without its challenges, though. Some things are very difficult to explain, like how I know which tables to join together when writing a SQL query. I know our database structure so well I hardly give it a thought, but for someone new looking at it, knowing where to start can be quite difficult.

Not everyone learns the same way so the trick to teaching people is figuring out how they learn best and using that to maximize what you teach to them.

I believe being a good teacher and being able to explain things well to others is a gift. It is a gift that not everyone possesses. I hope I can be a good enough teacher to share the knowledge I have with others in a way that is understandable.

2011: New Year, Fresh Start

When I started this blog last year, I wanted to keep the focus on software development and technology. Technology is a huge part of my life and I work in the field, but it is not the whole story of who I am.

I want to make this blog representative of me, so I am changing gears and making a slight course correction. I am still going to write about technology, but I am also going to write about whatever else is on my mind.

2011 is going to be a great year and I am excited to learn what it brings. I also look forward to sharing the year with everyone online. Hope you all have a great year too.

The Impact of Data Conversions on Implementation Success

The better part of my last two weeks has been spent working on a data conversion for a new client coming onto our software.  This particular conversion has been quite challenging and has required more effort than most I have worked on in the past.  I get the impression from a lot of people that they believe data conversion is a simple and trivial part of an implementation.

Let me tell you that data conversions are anything but simple and trivial.  They are difficult and require much more effort than anyone ever acknowledges.  I have seen just how critically important it is to “get it right” when it comes to converting data.  The smoother the transition from the legacy system to the new system, the more successful the implementation.  Nothing will derail an implementation or cause a client to lose confidence in the new system faster than issues caused by a bad data conversion.

I have seen firsthand how positive a good data conversion can be, but also how damaging a poor one is.  A good conversion and smooth transition can bring even the most difficult customers on board with a new system, but as soon as the customer’s confidence is shaken, it takes twice as many good things to bring them back on board.

Just some food for thought.