Minimizing Stored Procedure Recompilation Problems in SQL Server 2000


This article discusses the reasons that SQL Server 2000 decides to recompile a stored procedure and demonstrates techniques that can be used to minimize recompilation. If your system makes extensive use of stored procedures, minimizing recompilation can give you a nice boost in performance.

In a recent article I wrote about ways to identify and monitor stored procedure recompilation in SQL Server 2000. That article concentrated on figuring out whether recompiles are a problem and, if so, where they occur. As promised, this article shows how to change your stored procedures to cut the problem down to size. You should recognize, though, that there's a good reason for some recompiles, and you shouldn't try to eliminate all of them.

We'll start with a short review of what constitutes a recompile, why you might want to minimize them, and how we measure them.

Recompilation Review

The first time that SQL Server is asked to execute a stored procedure, user-defined function, or trigger, it has to convert the definition of the object into an execution plan. Execution plans are SQL Server's version of compiled code, and I'll refer to the process of creating the plan as compilation.
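
As a minimal sketch, consider a hypothetical procedure over a hypothetical Orders table (the names here are illustrative, not from any particular database):

    -- A hypothetical procedure; Orders and its columns are made up.
    CREATE PROCEDURE usp_GetOrdersByCustomer
        @CustomerID int
    AS
    SELECT OrderID, OrderDate, TotalDue
      FROM Orders
     WHERE CustomerID = @CustomerID
    GO

    -- The first execution forces SQL Server to compile a plan;
    -- later executions can reuse the cached plan.
    EXEC usp_GetOrdersByCustomer @CustomerID = 42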

SQL Server has what's called a cost-based optimizer. That is, it uses information about the values in the data columns to produce the best plan. Figuring out the best plan can consume a moderate amount of CPU and requires that various database objects be locked while the compilation takes place. Frequent recompiles can be an important reason why a database is considered slow.

Plan creation, a.k.a. compilation, weighs the estimated cost of executing each of the procedure's queries. The optimization algorithm takes into account the following (the sketch after the list shows one way to see the plan it picks):

  • The columns and base tables to be accessed
  • The joins, grouping, WHERE clause, and ORDER BY clause
  • The available indexes on the tables
  • The distribution of data in columns that are referenced in the query
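
You can watch the choice the optimizer makes. As a sketch, SET SHOWPLAN_TEXT tells SQL Server to return the estimated plan instead of executing the query (the table and column are the hypothetical ones from the example above):

    -- Return the estimated plan instead of running the query.
    SET SHOWPLAN_TEXT ON
    GO
    SELECT OrderID, OrderDate
      FROM Orders
     WHERE CustomerID = 42
    GO
    SET SHOWPLAN_TEXT OFF
    GO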

SQL Server gets information about the data distribution from the statistics that it keeps for just this purpose. As the distribution of data changes, the optimal plan may also change. For example, the optimal plan for very small tables is to do a complete table scan, no matter what the query requests, because the scan requires the minimum amount of I/O. As the table grows, the optimal approach changes to one that uses an index, depending, of course, on which columns are indexed. The point is that the optimal plan changes as the data changes.
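
To see the distribution statistics the optimizer works from, DBCC SHOW_STATISTICS displays the histogram kept for an index, and UPDATE STATISTICS refreshes it. A sketch, using a hypothetical index name:

    -- Display the statistics histogram for a hypothetical index.
    DBCC SHOW_STATISTICS ('Orders', 'IX_Orders_CustomerID')

    -- Refresh the statistics after heavy data changes.
    UPDATE STATISTICS Orders IX_Orders_CustomerID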

Execution plans aren't stored permanently. Instead, only the textual definition is kept permanently in syscomments, which holds the CREATE or ALTER statement that defined the object. As the code is compiled, a copy of the execution plan is saved in the master.dbo.syscacheobjects table, where it is used and becomes available for reuse.
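
You can look at the cache yourself. A sketch that lists cached stored procedure plans, most heavily reused first:

    -- List cached plans for stored procedures by reuse count.
    SELECT cacheobjtype, objtype, usecounts, sql
      FROM master.dbo.syscacheobjects
     WHERE objtype = 'Proc'
     ORDER BY usecounts DESC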

Aggregate numbers of recompiles can be monitored with the Windows Performance Monitor. Discovering which statements are causing recompiles is best done with the SQL Profiler. Last month's article and the videos that accompany it show how to use both of these tools.
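
If you'd rather sample the counters from T-SQL than from Performance Monitor, the same numbers are exposed through master.dbo.sysperfinfo. A sketch; note that the values are cumulative, so sample twice and subtract to get a per-second rate:

    -- Snapshot the cumulative compilation counters.
    SELECT counter_name, cntr_value
      FROM master.dbo.sysperfinfo
     WHERE counter_name IN ('SQL Compilations/sec',
                            'SQL Re-Compilations/sec')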

SQL Server usually reuses the plan in the cache. Sometimes, however, it can't and must recompile; at other times it decides that a better plan might be found and creates a new one. The next section summarizes the reasons and discusses what to do about them.
