SQL Server Trigger: Updating another table using Inner Joins

I had a situation recently where we were changing an internal process, which meant we would no longer be updating the data in a specific table because those columns were moving to another table.  Unfortunately there was going to be some crossover, so a temporary trigger might be necessary to keep populating the old table in the meantime.  Below we are using an UPDATE statement with inner joins.  Another cool feature I had forgotten about is the Inserted and Deleted tables.  We use the Inserted table to make sure we are only updating the rows affected by the UPDATE or INSERT statement that fired the trigger.

CREATE TRIGGER My_Trigger
ON Table_1
FOR INSERT, UPDATE
AS
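   -- Join to Inserted (below) so only the rows affected by the firing statement get updated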
   UPDATE Table_2
   SET Table_2.Column_1 = Table_1.Column_1,
       Table_2.Column_2 = Table_1.Column_2
   FROM Table_2
   INNER JOIN inserted
   ON Table_2.PK_1 = Inserted.PK_1
   AND Table_2.PK_2 = Inserted.PK_2
   INNER JOIN Table_1
   ON Table_2.PK_1 = Table_1.PK_1
   AND Table_2.PK_2 = Table_1.PK_2
GO
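As a quick sanity check, something like the following (with made-up key values, and assuming the matching row already exists in Table_2) should show the trigger pushing the new values across:

-- Hypothetical test: the key (1, 2) is assumed to already exist in Table_2,
-- and the column values are placeholders.
INSERT INTO Table_1 (PK_1, PK_2, Column_1, Column_2)
VALUES (1, 2, 'New value 1', 'New value 2');

-- Table_2 should now show the same values for that key
SELECT Column_1, Column_2
FROM Table_2
WHERE PK_1 = 1 AND PK_2 = 2;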

SQL Server 2008 migration complete!

Well, it was a smashing success!  We moved our SQL Server 2000 servers for our Point of Sale system to SQL Server 2008.  Everything went very well and everybody seemed impressed.  We had far fewer problems setting up the store subscriptions than last time, which was very good.
 

The Big Day Is Almost Here

Next weekend is going to be big.  We upgrade two SQL Server 2000 Clusters to SQL Server 2008.  Things are going well, but man am I getting nervous.  It’s pretty exciting to be at the center of such a big project, but it also means a lot of responsibility.
 
We’ve got a backend database of about 150GB with 5 transactional replication publications: three to Oracle (much fun with that) and two to a republisher.  The republisher currently has about 450 subscribers spread across 10 publications, 2 per organization.  Right now our republisher is a two-processor x86 box with 8GB of RAM.  When all the subscribers are connecting we hit a CPU bottleneck.  When sending somewhat large batches, about 120,000 rows, to the republisher, the CPU strain was so intense that the number of connections had to be reduced just to get the batches to complete in 3 to 4 hours.
 
To help relieve the strain we’re replacing the republisher with a box sporting two quad-core processors (X5460, baby!) and 32GB of RAM.  We should see much better performance processing large batches because SQL Server 2008 breaks the batches down to clear the locks while processing.  We are also balancing the subscribers across more publications, 4 per organization, for a total of 20.
 
The biggest hurdle may be initializing all those subscribers over some wickedly crappy VPN connections.  To combat the network issues we are compressing the snapshots, and once the subscribers are upgraded to SQL Server 2008 Express we will use .NET and RMO to copy the snapshot locally before applying it.
 
I’ll post after the big day and let you know how it goes.
 

SQL Server, RMO, and slow WAN links

If you’ve ever had to replicate data over a slow link, you know it can be quite a pain.  The environment in which I work has several hundred subscribers pulling data over various DSL circuits and 3G wireless circuits.  Sometimes connectivity can be spotty.  During a reinitialization we often end up with multiple restarts and an unusable database for long periods of time.  Obviously this doesn’t please the business side of the house.
 
Part of our strategy to relieve this problem is to break our two publications into four, remove some unneeded tables from replication, and trim out old or unused data.  Still, we had snapshot sizes around 50MB for some tables.  In our current environment, SQL Server 2000, compression is not enabled and subscribers point to the default snapshot location.  Microsoft SQL Server engineers (one in particular, who I might say is awesome) provided us with a great solution.
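Enabling compression is the straightforward piece.  If I recall correctly, a compressed snapshot has to be written to an alternate snapshot folder rather than the default one, so a minimal sketch looks something like this (the publication name and UNC path are placeholders):

-- Run at the publisher, in the publication database; names are placeholders.
-- Compressed snapshots require an alternate snapshot folder.
EXEC sp_changepublication
   @publication = N'Store_Publication',
   @property = N'alt_snapshot_folder',
   @value = N'\\PUBLISHER\ReplSnapshots';

EXEC sp_changepublication
   @publication = N'Store_Publication',
   @property = N'compress_snapshot',
   @value = N'true';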
 
We will be upgrading both our backend SQL instances to 2008 Enterprise and our store-level SQL instances to 2008 Express.  With SQL Server 2008 Express we can use RMO to control our replication synchronization and customize how it happens.  To ensure that the store (or "local") database doesn’t get wiped over and over as re-initialization loops, we use a .NET application to execute Robocopy and move the snapshot files from the server to the store’s local hard drive.  Once the copy has completed with no errors, we re-initialize the subscription and then start the synchronization.
 
If you are using SQL Server replication over slow links this could be a great solution for you.  Please let me know if you’d like more information or sample code.
 
BTW, I haven’t forgotten about the HTA information, but I’ve been wicked busy!

SQL Server Replication to Oracle

Wow, it’s been 4 months since I posted.  It’s been crazy though, with the kids back at school, my wife having knee surgery, and tons of work to do in the office.  As a matter of fact, the only reason I have time to write this is because my connection to my PC at work got lost due to some network issues.  Working from home can have some advantages! ;)
 
So I thought I’d write a little about what I’ve been working on.  We are upgrading the back end of our POS system from SQL Server 2000 x86 (YIKES!) to SQL Server 2008 x64 (YIKES!).  Since we send data to Oracle from this system, it is going to require several changes.  If you didn’t know, replication publications are no longer transformable in SQL Server 2008.  That was a great feature for us because our app uses a column with a name that is over 30 characters long.  This is a no-no in Oracle, so we had a DTS package that mapped the two different column names during the replication process.  The new solution is… Indexed Views!  Please hop on over to MSDN for the juicy details on indexed view creation.
 
So here’s the basics: you create a view of the table in question and then create a unique clustered index on the view.  There are some special settings you want to keep in mind, so make sure the settings below are set correctly:
 
SET NUMERIC_ROUNDABORT OFF;
SET ANSI_PADDING, ANSI_WARNINGS, CONCAT_NULL_YIELDS_NULL, ARITHABORT,
    QUOTED_IDENTIFIER, ANSI_NULLS ON;

 

Then create your view using WITH SCHEMABINDING.  This piece is extremely important and required for publishing.  Note that schema binding also requires two-part object names, hence the dbo prefix in the FROM clause below.

CREATE VIEW [dbo].[vTable1]
WITH SCHEMABINDING
AS
SELECT
   Store_Loc_ID,
   Cust_ID,
   Cust_Name,
   Cust_Column_With_A_Really_Long_Name AS Cust_Column_Short
FROM
   [dbo].[Table1]
 
Now we set up the index on the view:
CREATE UNIQUE CLUSTERED INDEX IDX_vTable1
   ON dbo.vTable1 (Store_Loc_ID, Cust_ID)
 
Then on to the really fun part: creating the publication.  Walk through the process, selecting your publishing database and the type of replication.  When selecting the articles you should see Indexed Views.  Select your view, vTable1, and modify the article destination to be the Oracle table name, then continue to the end, but choose to script the publication to a query editor window rather than create it.  In the script, look for "exec sp_addarticle".
 

EXEC sp_addarticle
   @publication = N'Indexed_View_Test',
   @article = N'vTable1',
   @source_owner = N'dbo',
   @source_object = N'vTable1',
   @type = N'indexed view logbased',
   @description = N'',
   @creation_script = N'',
   @pre_creation_cmd = N'none',
   @schema_option = 0x0000000000000001,
   @destination_table = N'Table1',
   @destination_owner = N'sa',
   @status = 16

 
The @type parameter will be scripted as 'indexed view schema only'.  Change it to 'indexed view logbased', then run the script to create your publication.
 
All that’s left to do is add your subscriber and you’re ready to go!
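For reference, the T-SQL for that last step looks roughly like the sketch below.  The subscriber name is a placeholder, and since this is a non-SQL Server (Oracle) subscriber it must already be registered at the publisher; if memory serves, @destination_db takes the special '(default destination)' value for heterogeneous subscribers, so double-check the documentation for your setup.

-- Sketch only: 'ORACLE_SUB' is a placeholder for an Oracle subscriber
-- already registered at the publisher.
EXEC sp_addsubscription
   @publication = N'Indexed_View_Test',
   @subscriber = N'ORACLE_SUB',
   @destination_db = N'(default destination)',
   @subscription_type = N'Push',
   @sync_type = N'automatic';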
 
I’m also working on another few posts that I hope to have out soon covering HTAs, SQL cursors, and using the .NET CLR in SQL Server 2005/2008.
 
Hope everyone is having a happy holiday season!
 
 
 

SQL Health and History

I was recently asked if I could help find a way to monitor file usage on our SQL servers, so we can better predict our future storage needs and review any large jumps in storage usage.  So I set about looking for a solution.  Being the VBScript zealot that I am, I started thinking about all the ways I could retrieve the database info from each server, grab each file and pull its size attribute, and insert that data into a database.
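For context, the do-it-yourself approach I was picturing boils down to scheduling something like the query below on each server and writing the results into a history table.  This is just a sketch: sys.master_files is the SQL Server 2005+ catalog view, and dbo.FileSizeHistory and its columns are hypothetical names used for illustration.

-- Capture current data/log file sizes for every database on the instance.
-- dbo.FileSizeHistory is a hypothetical history table.
INSERT INTO dbo.FileSizeHistory (CaptureDate, DatabaseName, LogicalName, PhysicalName, SizeMB)
SELECT
   GETDATE(),
   DB_NAME(mf.database_id),
   mf.name,
   mf.physical_name,
   mf.size * 8 / 1024      -- size is reported in 8KB pages
FROM sys.master_files AS mf;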
 
Wow, what a whipping!
 
Then I stumbled across SQLH2 while looking for something else.  Man, I love this tool!  It creates a repository database in the location of your liking and sets up a scheduled job to poll each server using standard Windows tools like WMI, storing that information in the SQLH2 repository.  Using SQL Server Reporting Services you are able to view the reports and give others access to them as well.   Download it at:
 
 
And get the canned reports at:
 
 
Enjoy!