SQL SERVER – Stored Procedure Optimization Tips – Best Practices
by Pinal Dave
We will go over how to optimize a stored procedure by making simple changes in the code. Please note there are many more tips, which we will cover in future articles.
- Include SET NOCOUNT ON statement: With every SELECT and DML statement, SQL Server returns a message indicating the number of rows affected by that statement. This information is mostly helpful while debugging the code, but it is useless after that. By setting SET NOCOUNT ON, we can disable this extra message. For stored procedures that contain several statements or Transact-SQL loops, setting SET NOCOUNT to ON can provide a significant performance boost because network traffic is greatly reduced.
CREATE PROC dbo.ProcName
AS
SET NOCOUNT ON;
--Procedure code here
SELECT column1 FROM dbo.TblTable1
-- Reset SET NOCOUNT to OFF
SET NOCOUNT OFF;
GO
- Use schema name with object name: The object name is qualified when used with its schema name. The schema name should be used with the stored procedure name and with all objects referenced inside the stored procedure. This helps in directly finding the compiled plan instead of searching for the object in other possible schemas before finally deciding to use a cached plan, if available. This process of searching and deciding a schema for an object leads to a COMPILE lock on the stored procedure and decreases its performance. Therefore, always refer to objects by their qualified name in the stored procedure, like
SELECT * FROM dbo.MyTable -- Preferred method
-- Instead of
SELECT * FROM MyTable -- Avoid this method

--And finally call the stored procedure with qualified name like:
EXEC dbo.MyProc -- Preferred method
--Instead of
EXEC MyProc -- Avoid this method
- Do not use the prefix “sp_” in the stored procedure name: If a stored procedure name begins with “sp_”, SQL Server first searches in the master database and then in the current session’s database. Searching in the master database causes extra overhead, and can even give a wrong result if another stored procedure with the same name is found in the master database. A short sketch of the preferred naming is given below.
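As a minimal sketch of this naming tip (the procedure names dbo.sp_GetPersons and dbo.usp_GetPersons are hypothetical; the dbo.tblPerson table is reused from the later examples):

-- Avoid: names starting with sp_ make SQL Server check the master database first
CREATE PROC dbo.sp_GetPersons
AS
SET NOCOUNT ON;
SELECT * FROM dbo.tblPerson;
GO

-- Preferred: a different prefix (or no prefix) avoids the master database lookup
CREATE PROC dbo.usp_GetPersons
AS
SET NOCOUNT ON;
SELECT * FROM dbo.tblPerson;
GO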
- Use IF EXISTS (SELECT 1) instead of (SELECT *): To check the existence of a record in another table, we use the IF EXISTS clause. The IF EXISTS clause returns True if any row is returned by the internal statement, whether it is a single value “1”, all columns of a record, or a complete recordset. The output of the internal statement is not used. Hence, to minimize the data processed and transferred over the network, we should use “1” in the SELECT clause of the internal statement, as shown below:
IF EXISTS (SELECT 1 FROM sysobjects WHERE name = 'MyTable' AND type = 'U')
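For comparison, a minimal sketch of the preferred and avoided forms of the same kind of check (reusing the dbo.tblPerson table and Age column from the other examples in this article):

-- Preferred: the internal statement returns only a constant
IF EXISTS (SELECT 1 FROM dbo.tblPerson WHERE Age = 25)
    PRINT 'Record found'

-- Avoid: the internal statement asks for all columns
IF EXISTS (SELECT * FROM dbo.tblPerson WHERE Age = 25)
    PRINT 'Record found'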
- Use the sp_executesql stored procedure instead of the EXECUTE statement.
The sp_executesql stored procedure supports parameters, so using sp_executesql instead of the EXECUTE statement improves the reusability of your code. The execution plan of a dynamic statement can be reused only if every character, including case, spaces, comments, and parameters, is the same for both statements. For example, if we execute the batch below:
DECLARE @Query VARCHAR(100)
DECLARE @Age INT
SET @Age = 25
SET @Query = 'SELECT * FROM dbo.tblPerson WHERE Age = ' + CONVERT(VARCHAR(3), @Age)
EXEC (@Query)
If we execute the above batch again using a different @Age value, the execution plan for the SELECT statement created for @Age = 25 will not be reused. However, if we write the batch as given below,
DECLARE @Query NVARCHAR(100)
SET @Query = N'SELECT * FROM dbo.tblPerson WHERE Age = @Age'
EXECUTE sp_executesql @Query, N'@Age int', @Age = 25
the compiled plan of this SELECT statement will be reused for different values of the @Age parameter. The reuse of the existing compiled plan results in improved performance.
- Try to avoid using SQL Server cursors whenever possible: A cursor uses a lot of resources for overhead processing to maintain the current record position in a recordset, and this decreases performance. If we need to process records one by one in a loop, we should use a WHILE loop. Wherever possible, we should replace the cursor-based approach with a set-based approach, because the SQL Server engine is designed and optimized to perform set-based operations very fast. Again, please note that a cursor is also a kind of WHILE loop. A sketch of this replacement is given below.
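As a minimal sketch, reusing the dbo.tblPerson table and Age column from the earlier examples (the PersonID and IsMinor columns are hypothetical), a row-by-row cursor update can often be replaced by a single set-based statement:

-- Cursor-based approach (slower: processes one row at a time)
DECLARE @PersonID INT;
DECLARE PersonCursor CURSOR FOR
    SELECT PersonID FROM dbo.tblPerson WHERE Age < 18;
OPEN PersonCursor;
FETCH NEXT FROM PersonCursor INTO @PersonID;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.tblPerson SET IsMinor = 1 WHERE PersonID = @PersonID;
    FETCH NEXT FROM PersonCursor INTO @PersonID;
END
CLOSE PersonCursor;
DEALLOCATE PersonCursor;

-- Set-based approach (preferred: one statement, optimized by the engine)
UPDATE dbo.tblPerson SET IsMinor = 1 WHERE Age < 18;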
- Keep the transaction as short as possible: The length of a transaction affects blocking and deadlocking. Exclusive locks are not released until the end of the transaction. At higher isolation levels, shared locks are also held for the duration of the transaction. Therefore, a lengthy transaction means locks held for a longer time, and locks held for a longer time turn into blocking. In some cases, blocking also escalates into deadlocks. So, for faster execution and less blocking, the transaction should be kept as short as possible, as sketched below.
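A minimal sketch of this idea (the dbo.Customers and dbo.Orders tables and their columns are hypothetical): do the lookups and validation before opening the transaction, and keep only the statements that must be atomic inside it:

-- Validation is done OUTSIDE the transaction, so no locks are held during it
DECLARE @CustomerID INT = 42;
DECLARE @IsValid BIT = 0;

IF EXISTS (SELECT 1 FROM dbo.Customers WHERE CustomerID = @CustomerID)
    SET @IsValid = 1;

IF @IsValid = 1
BEGIN
    -- Only the data modifications that must succeed or fail together stay inside
    BEGIN TRANSACTION;
        UPDATE dbo.Customers SET LastOrderDate = GETDATE() WHERE CustomerID = @CustomerID;
        INSERT INTO dbo.Orders (CustomerID, OrderDate) VALUES (@CustomerID, GETDATE());
    COMMIT TRANSACTION;
END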
- Use TRY-CATCH for error handling: Prior to SQL Server 2005, error-handling code made up a big portion of the actual code, because an error check statement had to be written after every T-SQL statement. More code always consumes more resources and time. In SQL Server 2005, a new, simpler way was introduced for the same purpose. The syntax is as follows:
BEGIN TRY
--Your t-sql code goes here
END TRY
BEGIN CATCH
--Your error handling code goes here
END CATCH
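A minimal sketch of how this skeleton is typically filled in (the dbo.tblPerson table is reused from the earlier examples; wrapping the work in a transaction and rolling it back on error is an assumption about how the error would be handled):

BEGIN TRY
    BEGIN TRANSACTION;
        UPDATE dbo.tblPerson SET Age = Age + 1 WHERE Age = 25;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Roll back any open transaction and report the error details
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    SELECT ERROR_NUMBER() AS ErrorNumber,
           ERROR_MESSAGE() AS ErrorMessage;
END CATCH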