Back in 1954, Paul Fitts published a model of human movement that is now better known as Fitts's Law.
The time required to rapidly move to a target area is a function of the distance to and the size of the target.
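In the commonly used Shannon formulation (the exact form the original post had in mind is an assumption), the predicted movement time is:

```latex
T = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

where $D$ is the distance to the target, $W$ its width along the axis of motion, and $a$ and $b$ are empirically fitted constants.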
The edges of a computer monitor give an interesting twist to Fitts's Law: the four corners of the screen act as four buttons of unlimited size. Just throw your mouse toward a corner and, no matter how far you move it, the pointer will quickly lock onto that one specific pixel.
Microsoft exploits this in Windows, for example with the Close button and the Start button.
Equally, the top and bottom edges of the screen have unlimited height, and the left and right edges have unlimited width.
This all worked great until multi-monitor setups came along. The default arrangement for a dual-monitor setup is the side-by-side layout.
This side-by-side layout is not very good, as it removes all the "unlimited width" targets from one side of your main monitor and one side of your secondary monitor. In my case, I placed my secondary monitor to the left of the main monitor. Controls on the left of the GUI no longer get the "unlimited width" provided by the screen edge: it is no longer possible to quickly throw the mouse into the bottom-left corner to hit the Start menu, and clicking controls on the left-hand side of the main monitor or on the right-hand side of the secondary monitor requires precision clicks.
My improvement on the default side-by-side layout is arranging the monitors in a staggered fashion. The effect of this arrangement is that the transition to the other monitor happens via only one pixel. It requires a bit of getting used to at first, but after a few days it feels like second nature. The result of this setup is the loss of only 1 "endless" pixel on the main monitor and 1 pixel on the secondary monitor (instead of 1200 pixels on each screen). The best staggered arrangement is to put the secondary screen at the top left or bottom right of your main monitor, which retains easy access to the Close button and the Start menu.
Setting this layout up on Windows 7 is done by right-clicking the desktop and selecting "Screen Resolution". On that screen, drag the pictogram of screen 1 above the pictogram of screen 2. Click OK and try it out.
I understand why this is not the default layout as the way to transition to the second screen does not feel very natural at first, but Windows should at least give a hint so people can discover that it's possible.
SQL Server 2008 once again comes with a TSQL debugger (which is really quite good). The only condition for using it is that the code is available in unencrypted form in the database. To ensure this, you have 2 options:
You can check whether your code is unencrypted by running sp_helptext 'p_myproc'; the source of your procedure should appear in the results window.
The first time you start debugging a TSQL statement, Management Studio pops up a request to alter your firewall settings to allow debugging. Unfortunately, this did not work at all in my case, so I had to dive in and make the required changes myself.
A number of firewall changes have to be made to enable TSQL debugging on your PC.
Start the Firewall tool:
In the firewall application a number of changes need to be made:
Right click on the Inbound rules node and select 'New Rule...'
Then select 'Port' and click Next
Now enter the TCP Port 135 as excluded and click Next
In the next dialog you make sure 'Allow the Connection' is selected and click Next
On next pane you specify which networks can access this port. For security reasons it is best to disable "Public" networks. Select the options as shown below and click next.
In the Naming dialog enter something like "TSQL Remote Debugging: Allow DCOM" and click finish:
Again right click on the "Inbound rules" node and select New Rule. This time we take the option "Program" and click next.
In the next dialog we will select the path where sqlservr.exe is installed. (This depends on the name of your SQL Server instance; you can search for sqlservr.exe in the "C:\Program Files" folder.) Click Next when done.
Again select the "Allow the connection" option and click Next. In the next dialog, once again disable "Public" networks. Click Next and enter "TSQL Remote Debugging: sqlservr.exe" as the name.
Follow the same process as for "Exclude TCP Port 135" but enter the ports "500, 4500" in the "Protocols and ports" dialog
Proceed the same as in "Exclude TCP Port 135" and enter "TSQL Remote Debugging: IpSec" as name
The next application to exclude is Management Studio. Follow the same approach as for "Exclude sqlservr.exe" but search for ssms.exe (C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\ssms.exe). Proceed identically to "Exclude sqlservr.exe" but enter "TSQL Remote Debugging: ssms.exe" as the name.
This one is a bit special because it exists in a 32-bit and 64-bit flavor. For convenience we will exclude them both.
Proceed the same as for sqlservr.exe and ssms.exe and exclude the following 2 applications
And the second rule
This concludes the firewall configuration required to perform TSQL Debugging.
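For reference, the same rules can also be scripted instead of clicked together. A sketch using the built-in netsh tool; the rule names follow the ones used above, the sqlservr.exe path assumes a default instance (MSSQL10.MSSQLSERVER), and the port rules mirror the post's steps:

```
rem Sketch only - adjust paths to your own instance before running (elevated prompt)
netsh advfirewall firewall add rule name="TSQL Remote Debugging: Allow DCOM" dir=in action=allow protocol=TCP localport=135 profile=domain,private
netsh advfirewall firewall add rule name="TSQL Remote Debugging: IpSec" dir=in action=allow protocol=TCP localport=500,4500 profile=domain,private
netsh advfirewall firewall add rule name="TSQL Remote Debugging: sqlservr.exe" dir=in action=allow program="C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Binn\sqlservr.exe" profile=domain,private
netsh advfirewall firewall add rule name="TSQL Remote Debugging: ssms.exe" dir=in action=allow program="C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\ssms.exe" profile=domain,private
```

Setting profile=domain,private keeps "Public" networks disabled, matching the choice made in the dialogs.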
Open Management Studio, enter the SQL statement you wish to debug, and click the "Start Debugging" option in the "Debug" menu (or press Alt-F5).
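If you need a statement to try this on, any batch will do; a trivial example (not from the original post) that gives the debugger a few statements to step through:

```sql
DECLARE @i int;
SET @i = 0;
WHILE @i < 3
BEGIN
    SET @i = @i + 1;  -- watch @i change in the Locals window while stepping
END
SELECT @i AS counter;
```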
This will start your query and stop processing at the first statement it encounters, which you can see from the yellow arrow in the left margin.
To continue we have a number of new options in the debug menu
When the debugger meets a piece of dynamic SQL being executed, it opens a new window where you can step through the generated SQL code (which is super nice).
To set an extra breakpoint (i.e. a line where execution will pause when you continue with Alt-F5), click in the margin of the code window or press F9; a red dot will appear. If you then hit Alt-F5, execution stops at this red dot, as shown by the yellow arrow:
Another convenient way to "Catch up" to the current line is by right clicking and selecting the "Run to cursor" option. This will advance the debugger to your current line.
To help you debug your code there are a number of help windows available.
The first one is the "Locals" window, which displays the contents of the parameters and variables currently in the scope of the code.
The next dialog is the Quick Watch window. If you have a complex expression and want to see the result of just a part of it, you can do this with a quick watch: select the part to evaluate, right-click, and choose the Quick Watch option. This opens a window where you can enter variables, parameters, and the system functions whose names start with @@.
The TSQL debugger does not provide an "out of the box" way to view the content of "local" temp tables. For this I built a stored procedure which you can run from any thread. The code is available on https://github.com/FilipDeVos/sp_select.
Something you should not forget is that the output of your original query is still being printed to the results pane of the window where you started debugging. An easy layout for working with the debugger is a "Vertical Split", which lets you see the output together with the debugged statements.
All kinds of people work in a company: you have Frank, who likes to continuously curse at his computer, and Tom, who quietly works all day. We all know somebody like Andy, who will stand up and defend his opinion tooth and nail. And there is Sophie, who is so shy that she never dares voice a concern. Finally, there are all the people whose characters fall somewhere between these extremes.
The good thing is that no matter what kind of people you have in your team, an easy way to build a trust relationship, while offering a forum to express thoughts and worries, is organizing regular One-on-One meetings.
One on One - One2One - 1to1
Many team leads hold these meetings informally, over a beer or at the coffee machine, but in my experience the best way to ensure they happen consistently, and to show that they are taken seriously, is to have a formal meeting. Nothing communicates "I take your opinion seriously" more than a regular timeslot where all you will do is listen and be available to solve someone's issues.
An important part of the One on One meeting is explaining the purpose.
My One-on-One meetings are organized on a monthly basis: I book a recurring 1-hour slot during the first week of the month. They could be arranged more frequently, but in a sizable team that quickly eats away a large chunk of your time, and restricting the meetings to 30 minutes or less to compensate makes them too short to really get to the difficult topics. Monthly works best for me.
I will reserve a private office somewhere, and I'll sit next to the person so we can agree on what I am typing as meeting notes. (Disconnect the network and silence smartphones; I consider it extremely disrespectful to interrupt the conversation.)
At the end of the meeting we quickly review the notes and I will send them to the person with my line manager in CC.
I break the meeting down into 4 parts.
This part of the meeting has a double purpose: it puts your team member at ease, since he can talk about stuff he knows, and you get the chance to validate your project progress information. Let the person speak, and ask specific questions to get them to talk. For people who didn't have the best month, you will have to stress that it's not about blaming or attacking them; it's just to make sure that you have the right information and that they spend the time to check whether they are on track or not.
(Sometimes it is not obvious to somebody that he is behind on his work.)
Now we are reaching the meat of the meeting. This is the point where you get to see if they have enough information on the upcoming projects. (Sometimes it gets painful when I didn't do the best job of letting people know what is coming up, but it is still very useful.) It also lines up goals for the short term, which gives a sense of purpose to the work to be achieved.
This one I only added recently. Some people were complaining that they were taken by surprise at their yearly performance evaluation: "I didn't know that was an objective!", "You never gave me time to do this!". Going over the list of objectives on a monthly basis gives people advance warning and allows them to ask me additional questions on how to achieve these objectives. It also makes it easy to plan extra time in their schedule.
I have a page in OneNote with the objectives listed as a check box list and we can simply tick off the things which have been achieved. This is very satisfying for both of us.
Here I push people to tell me what's on their mind. Very often this is easy, since you have already talked about a lot of things and had the time to re-establish the trust needed to hear the complaints and criticism. Most of what I do here is shut up and take notes. It is useful to stop people from descending into "complaint mode" by asking "What could I do for you?" or "How could we have prevented this?".
Initially, about half of the team members were reluctant to come to these meetings, or tried to push them back, but after 3-4 sessions everybody is happy they happen, and they even insist on the meetings happening on time. My impression is that they have felt better since the meetings were established, and for me they give a wealth of information on how people are doing, how they feel in their job, and whether I can do something to make things better. If these meetings prevent only one person from searching for another job, they are time well spent.
If you have never tried doing One-on-One meetings, give it a try for a few months; the initial investment is small, and the results are potentially big.
The TSQL debugger in SQL Server has become more mature over time (after a little dip in SQL Server 2005), and it is now quite easy to debug stored procedures and functions. The one feature still missing is a view of the content of temp tables while stepping through the code.
Some older ways of viewing the content are peppering the code with select statements, or replacing the temp table with a permanent table.
I have long dreamed of a way to query the content of a temp table from another connection. That way I can step through a stored procedure and check the content of the table while the procedure is waiting at a breakpoint. Since I never managed to find a proper solution, I decided to build something of my own.
The result of this work is the “sp_select” project on github (https://github.com/FilipDeVos/sp_select).
This project contains 2 stored procedures: sp_select and sp_selectpages.
They will use the output of the “DBCC PAGE” command to build up the content of a temp table.
The procedures can be used as follows. First open a new Query window and run the following statement without closing the query window:
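(The original statement is not preserved here; a minimal stand-in, with table and column names assumed for illustration:)

```sql
-- Create a local temp table and leave this window open so it stays in scope
CREATE TABLE #customer (id int, name varchar(50));
INSERT INTO #customer VALUES (1, 'Mick Jagger');
INSERT INTO #customer VALUES (2, 'Jim Morrison');
```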
Open a second query window and run the following statement to see the content of the temp table:
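(Again, the original statement is not preserved. Assuming the temp table above, and assuming sp_select takes the table name as its first parameter; check the repository's README for the exact syntax:)

```sql
-- Run from a different session than the one that created #customer
EXEC sp_select '#customer';
```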
This query will return the following result set:
How does it work?
The procedure sp_select will try to pinpoint the object_id of the table you are trying to get the data from. For a permanent table this is quite easy: the function object_id() will return the correct value. For a temp table it is quite difficult, as SQL Server does not store the link between the temp table in tempdb and the session in an easily accessible way.
There are 3 scenarios implemented in the procedures.
Once the object_id is determined the procedure sp_selectpages will be used to return the content.
There are a few outstanding issues to fix, but fixing them will only extend the use of this procedure.
Every SQL Server developer and DBA has to use sp_helpindex from time to time to get some information on the indexes of a table.
I have 2 major problems with this stored procedure:
If I run the following statement:
The following information will be returned:
| index_name | index_description | index_keys |
|---|---|---|
| pk_entity | clustered, unique, primary key located on PRIMARY | entity |
To improve the way to work with this information I built the following function
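The original function is not reproduced here, but a minimal sketch of the idea is an inline table-valued function over the catalog views. The name fn_indexinfo and the exact column list are assumptions, not the author's code:

```sql
CREATE FUNCTION dbo.fn_indexinfo (@table_name sysname)
RETURNS TABLE
AS
RETURN
    -- One row per index column, including the data space (filegroup or
    -- partition scheme) the index lives on
    SELECT i.name       AS index_name,
           i.type_desc  AS index_type,
           i.is_unique,
           i.is_primary_key,
           c.name       AS column_name,
           ic.key_ordinal,
           ds.name      AS data_space
    FROM sys.indexes i
    JOIN sys.index_columns ic
         ON ic.object_id = i.object_id AND ic.index_id = i.index_id
    JOIN sys.columns c
         ON c.object_id = ic.object_id AND c.column_id = ic.column_id
    JOIN sys.data_spaces ds
         ON ds.data_space_id = i.data_space_id
    WHERE i.object_id = OBJECT_ID(@table_name);
```

It could then be queried as, for example, SELECT * FROM dbo.fn_indexinfo('entity').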
If I run:
The following information will be returned:
If we use the function on a table with a partitioned index (like the script below) the function will return more information about the data space as well.
So, let's run the following statement on the partitioned table:
The output will be:
When you create a database in SQL Server using the "CREATE DATABASE" statement without any options, the database files will be created in some "default" locations.
Many people think this default location is the same as the location of the master database, but this is incorrect: the location can be altered in the server properties after installation. (In Management Studio, right-click the server and select "Server Properties", go to the "Database Settings" tab, and near the bottom you will see a section with "Database Default locations".)
For example, suppose I execute the following statement:
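(The original statement is not preserved; a minimal equivalent, with the database name assumed:)

```sql
CREATE DATABASE TestDb;
```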
On my machine this means 2 files will be created:
This is nice, since you don't have to remember a path to pass, but when you want to restore a database from another server (where a different path is used) it can be a pain.
To restore a database with a specific location for the files we use the following syntax:
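(The original syntax example is not preserved; a sketch of the usual RESTORE ... WITH MOVE form, with backup path and logical file names assumed:)

```sql
RESTORE DATABASE TestDb
FROM DISK = N'C:\Backup\TestDb.bak'
WITH MOVE N'TestDb'     TO N'D:\Data\TestDb.mdf',
     MOVE N'TestDb_log' TO N'D:\Log\TestDb_log.ldf';
```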
To make it easy to restore a database using a script I designed a function to figure out the location for the default Data path (and another one for the default log path).
The locations of these paths I got from http://msdn.microsoft.com/en-us/library/ms143547.aspx/
The output can be stored in a 260-character variable (260 being MAX_PATH, as specified in http://msdn.microsoft.com/en-us/library/aa365247.aspx).
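The original functions are not reproduced here. One way to read the configured default data path on versions of this era is the registry, via xp_instance_regread; this is a script-level sketch of the idea, not the author's function:

```sql
DECLARE @DefaultData nvarchar(260);  -- 260 = MAX_PATH

-- Reads the instance-specific "DefaultData" registry value
EXEC master.dbo.xp_instance_regread
     N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'DefaultData',
     @DefaultData OUTPUT;

SELECT @DefaultData AS DefaultDataPath;  -- NULL when no explicit default is set
```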
Edit: Rudi Larno posted a very nice refactored version of the code I posted (note especially the nice recursion in it).
The functions can then be used as follows (or you can just copy paste the output):
In SQL Server it is possible to create temporary tables: special tables that are automatically cleaned up when they go out of scope. There are two types of temporary tables, global and local.

Global temporary tables are visible in all sessions currently connected to the SQL Server instance and only go out of scope when the last session disconnects from the server. They are easy to recognise because their name starts with 2 #-signs (for example "##customers").

Local temporary tables are visible only in the current session and go out of scope when this session ends. Their name always starts with a single #-sign (for example "#customer").

Traditionally, people create temporary tables the same way as they create normal tables:
This seems like a perfectly reasonable way to create a temp table. Unfortunately SQL Server has one big flaw.
Presume we have the following situation: my SQL Server is installed with a Latin1_General_Bin collation, and we create a database with a different collation.
Then we create a table in this database with some details about Rock Stars and a temp table with additional info.
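(The scripts are not preserved here; a hedged reconstruction, with all names and the database collation assumed:)

```sql
-- Database collation deliberately differs from the server collation
CREATE DATABASE RockStars COLLATE Latin1_General_CI_AS;
GO
USE RockStars;

CREATE TABLE RockStar (id int, name varchar(100), location varchar(100));
CREATE TABLE #RockStarInfo (name varchar(100), status varchar(100));
```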
This looks like a perfectly valid situation, but when we run the following query the problems start:
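(The query itself is not preserved; assuming the names above, any comparison between the varchar columns of the two tables triggers the issue, because the temp table column carries the tempdb collation. SQL Server typically rejects such a join with Msg 468, "Cannot resolve the collation conflict ... in the equal to operation".)

```sql
SELECT r.id, r.name, r.location, i.name, i.status
FROM RockStar r
JOIN #RockStarInfo i
     ON i.name = r.name;  -- collation conflict on this comparison
```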
When we inspect the metadata closely we find the following:
For some reason SQL Server will create temporary tables using the collation of tempdb instead of inheriting the collation of the current db in the connection context.
But all is not lost. If we simply use a different approach to creating our temporary table, we can solve all collation problems without adding COLLATE clauses to our code. Besides the CREATE TABLE statement, it is also possible to use SELECT INTO to create a table.
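A sketch of the SELECT INTO approach (names assumed as before): expressions such as CAST(NULL AS varchar(100)) take the collation of the current database, so the new temp table inherits it:

```sql
IF OBJECT_ID('tempdb..#RockStarInfo') IS NOT NULL
    DROP TABLE #RockStarInfo;  -- drop the CREATE TABLE version first

SELECT CAST(NULL AS varchar(100)) AS name,
       CAST(NULL AS varchar(100)) AS status
INTO   #RockStarInfo
WHERE  1 = 0;  -- empty table; columns take the current database collation
```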
| 1 | Mick Jagger | USA | Mick Jagger | Alive and kicking |
| 2 | Jim Morrison | Heaven or Hell | Jim Morrison | Passed away |
This time no errors were raised and if we check the metadata in tempdb we can see that the collation is the same as the collation in the source db.
The End... or not completely...
When all you use are base system data types, the problems are solved. Unfortunately, when you want to use alias data types, the syntax I put in the example does not work, since you can only use CAST() with system data types.
Fortunately, a (slightly more convoluted) solution exists for this as well.
| 1 | Jeff Buckley | Heaven | Jeff Buckley | Drowned sadly |
| 2 | Neil Young | USA | Neil Young | Keeps on rocking |
As you can see, it is not possible to use the alias data type directly in the SELECT INTO statement, but it is possible to declare a variable with the datatype and then use the variable in the SELECT INTO statement.
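A sketch of that workaround, assuming a hypothetical alias type dbo.NameType already exists in the database:

```sql
DECLARE @name dbo.NameType;

SELECT @name AS name
INTO   #RockStarInfo2
WHERE  1 = 0;  -- the column inherits the alias data type from the variable
```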
And again, when we check the table layout in tempdb, we can see the collation has been inherited.
When you start supporting the internationalization features of SQL Server, many of the development practices you have learned and applied for years become invalid. And it is important to test your software in "non standard" installations like databases deployed with different collations.
A client came to me with another string-formatting problem. I would never recommend formatting strings in T-SQL; formatting is better done on the client side. The question was how to strip trailing zeros from a numeric value stored in a string, while retaining the decimals that are not zero.
| Input | Needs to show |
|---|---|
| 1.0501 | 1.0501 |
| 1.0500 | 1.05 |
| 1.0000 | 1 |
This is easy to do by replacing all zeroes with a space and then using RTRIM() to remove the spaces on the right. Then replace the spaces again with zeroes and you have the correct result, except when all decimals were zero; in that case the decimal point needs to be suppressed as well. I poured the code into a function that is easier to use.
Note: The function contains some additional logic to determine whether the decimal separator is a point or a comma.
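The original function is not included here; below is a minimal sketch of the technique described above. The function name is an assumption, and the separator handling is simplified to "point or comma":

```sql
CREATE FUNCTION dbo.fn_TrimTrailingZeros (@value varchar(40))
RETURNS varchar(40)
AS
BEGIN
    -- Only touch values that actually contain a decimal separator,
    -- otherwise '100' would wrongly become '1'
    IF @value NOT LIKE '%.%' AND @value NOT LIKE '%,%'
        RETURN @value;

    DECLARE @result varchar(40);

    -- zeroes -> spaces, trim the right side, spaces -> zeroes again
    SET @result = REPLACE(RTRIM(REPLACE(@value, '0', ' ')), ' ', '0');

    -- When all decimals were zero, drop the dangling separator too
    IF RIGHT(@result, 1) IN ('.', ',')
        SET @result = LEFT(@result, LEN(@result) - 1);

    RETURN @result;
END
```

With this sketch, '1.0500' becomes '1.05' and '1.0000' becomes '1', while internal zeroes (as in '1.0501') are preserved because RTRIM only removes the trailing spaces.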
When developing an application that does a lot of inserts and updates, it is sometimes easier to wrap both statements in one stored procedure. This results in a simple procedure that will check if a certain record exists. When no record is found a new one is inserted, when a record is found the values are updated.
We will use the following table script as example to explain the problem:
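(The script is not preserved; a minimal stand-in with assumed names:)

```sql
CREATE TABLE Customer
(
    id   int          NOT NULL PRIMARY KEY,
    name varchar(100) NOT NULL
);
```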
When making an insert/update procedure most people write something like this:
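A sketch of that typical (race-prone) pattern, using the stand-in table and an assumed procedure name:

```sql
CREATE PROCEDURE p_SaveCustomer
    @id   int,
    @name varchar(100)
AS
BEGIN
    -- Check, then write: the gap between these two steps is the problem
    IF EXISTS (SELECT 1 FROM Customer WHERE id = @id)
        UPDATE Customer SET name = @name WHERE id = @id;
    ELSE
        INSERT INTO Customer (id, name) VALUES (@id, @name);
END
```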
This looks like the perfect solution to our problem, except for one detail: when this procedure is executed from several threads at the same time to update the same record, it will crash.
Let's alter our procedure script a bit to make it easier to reproduce the problem.
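(The altered script is not preserved; the usual trick, assumed here, is to widen the window between the check and the write with a WAITFOR so the race is easy to hit:)

```sql
ALTER PROCEDURE p_SaveCustomer
    @id   int,
    @name varchar(100)
AS
BEGIN
    IF EXISTS (SELECT 1 FROM Customer WHERE id = @id)
    BEGIN
        WAITFOR DELAY '00:00:05';  -- give the other thread time to pass the check too
        UPDATE Customer SET name = @name WHERE id = @id;
    END
    ELSE
    BEGIN
        WAITFOR DELAY '00:00:05';
        INSERT INTO Customer (id, name) VALUES (@id, @name);
    END
END
```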
Compile this script and then run the following statements (at the same time) in 2 query windows.
Surprise surprise, look at the error we get in the second query window...
Why does this happen? Simple: the first execution checks whether the record exists and decides to insert a new record because the IF EXISTS returns false. The second thread executes the same query at the same time, and of course the existence check returns false as well. They both try to insert, and a primary key error happens.
Some people think that this will never happen, but they are wrong: from the moment you are working with large amounts of data and users, this conflict will happen very often.
Luckily, there is an easy way to solve this (very annoying) problem by using lock hints to our advantage.
We will update our test procedure to include a transaction and an Exclusive Lock:
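A sketch of the locked variant. The post describes a transaction plus an exclusive lock; XLOCK combined with HOLDLOCK keeps the lock until the transaction commits (the exact hints used in the original are an assumption):

```sql
ALTER PROCEDURE p_SaveCustomer
    @id   int,
    @name varchar(100)
AS
BEGIN
    BEGIN TRANSACTION;

    -- The lock hints serialize concurrent callers for this key: the second
    -- caller blocks here until the first commits
    IF EXISTS (SELECT 1 FROM Customer WITH (XLOCK, HOLDLOCK) WHERE id = @id)
        UPDATE Customer SET name = @name WHERE id = @id;
    ELSE
        INSERT INTO Customer (id, name) VALUES (@id, @name);

    COMMIT TRANSACTION;
END
```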
The updated code has the effect that the second thread will wait until the first thread has completed its operation. It is no longer possible for another operation on the same record to "sneak" in between the check and the insert or update.
Since I simplified the procedure script for the example you can see a complete version of the code below.
Addendum in 2011: What could be the consequence of this script? Deadlocks. Since SQL Server does "lock escalation", it is possible to encounter deadlocks with this pattern. It is up to you to decide which error you prefer catching. The procedure below shows an approach to catch the duplicate key errors in TSQL and retry.