Documentation for Server & Application Monitor
Monitoring your applications and environment is a key capability of Hybrid Cloud Observability and is also available in a standalone module, Server & Application Monitor (SAM). Hybrid Cloud Observability and SAM are built on the self-hosted SolarWinds Platform.

SQL Server

This widget shows the status of SQL Server counters that warrant a view independent of grouped widgets.

The icons of the listed performance counters in the Statistic Name column change color to reflect each counter's current state. Clicking any performance counter in the Statistic Name column takes you to the Performance Counter Details page for that counter.

If the value of a listed performance counter approaches or crosses the Warning threshold, a yellow band appears for that counter so that plotted values crossing the threshold can easily be seen against it. If the value approaches or crosses the Critical threshold, a red band appears in the same way. Hovering over any time period within the chart displays detailed information about that period in a tooltip. Hovering over a counter displays detailed information about that performance counter.

Counter Expert Knowledge

Batch Requests/sec

This performance counter returns the number of Batch Requests that SQL Server receives per second.

The values this monitor returns generally track how busy your server's CPUs are. From a network-bottleneck standpoint, a typical 100 Mbps network card can handle only about 3,000 batch requests per second.

Potential Issue: Typically, more than 1,000 batch requests per second indicates a busy SQL Server, and in this situation you may experience a CPU bottleneck. This number is relative: the more advanced your hardware, the more batch requests per second your SQL Server can handle.

Low Batch Requests/sec can be misleading; a SQL Statements/sec counter would be a more accurate measure of SQL Server activity. For example, an application may call only a few stored procedures, with each stored procedure completing a large amount of processing. In this case, you see a low number for Batch Requests/sec, but each stored procedure (one batch) executes many SQL statements that drive up CPU and other resources. As a result, counter thresholds based on the number of Batch Requests/sec may fail to identify issues, because the batch requests on such a server are unusually low for the level of activity on the server.

Resolution: Check your SQL Server and verify that system resources are being used efficiently. Applications other than SQL Server may be using unnecessary system resources; try to recover memory by closing unnecessary applications. Installing additional memory and upgrading your hardware should solve this problem.
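
If you want to spot-check this counter outside the widget, one option is to sample it from the sys.dm_os_performance_counters view. The value stored there is cumulative, so the minimal sketch below samples it twice and divides by the elapsed time; the 10-second delay is an arbitrary sampling window chosen for illustration.

    -- Sample the cumulative Batch Requests counter twice and derive a per-second rate.
    DECLARE @first BIGINT, @second BIGINT;

    SELECT @first = cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Batch Requests/sec';

    WAITFOR DELAY '00:00:10';   -- illustrative 10-second sampling window

    SELECT @second = cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Batch Requests/sec';

    SELECT (@second - @first) / 10.0 AS [Batch Requests per second];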

SQL Compilations/sec

This performance counter returns the number of times per second that SQL Server compilations have occurred.

This value should be as low as possible.

Potential Issue: If you see a high value, say above 100, this can be an indication that many ad hoc queries are running, which may cause increased CPU usage.

Resolution: Rewrite the running ad hoc queries as stored procedures, or use the sp_executesql command.
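
As an illustration of the sp_executesql approach, the sketch below replaces an ad hoc query that embeds a literal (and therefore can compile a new plan for every distinct value) with a parameterized call whose cached plan can be reused; the table and column names are placeholders.

    -- Ad hoc form: each distinct literal value can produce a separate compilation.
    -- SELECT OrderID, Total FROM dbo.Orders WHERE CustomerID = 42;

    -- Parameterized form: one cached plan is reused for any @CustomerID value.
    EXEC sp_executesql
        N'SELECT OrderID, Total FROM dbo.Orders WHERE CustomerID = @CustomerID',
        N'@CustomerID INT',
        @CustomerID = 42;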

SQL Recompilations/sec

This performance counter returns the number of SQL statement recompiles that are triggered per second.

When an execution plan is invalidated due to some significant event, SQL Server re-compiles it. Recompiles, like compiles, are expensive operations, so you want to minimize the number of recompiles. Ideally, you want to keep this counter at less than 10% of the number of Compilations/sec; in other words, keep this value as low as possible.

In SQL Server 2000, when SQL Server recompiles a stored procedure, the entire stored procedure is recompiled, not just the statement that triggered the recompilation. In SQL Server 2008 and SQL Server 2005 SP3, the behavior changed to statement-level recompilation: only the statement that caused the recompilation is compiled, not the entire procedure. This uses less CPU and results in less contention on lock resources such as compile locks.

Potential Issue: Returned values that are high can indicate deadlocks and compile locks that are not compatible with any locking type.

Resolution: Recompilation can happen for various reasons, such as: a schema change; a statistics change; a deferred compile; a Set option change; a temporary table change; or a stored procedure created with the With Recompile option or run with the Option (Recompile) query hint.

  • If the recompile occurred because a Set option changed, use SQL Server Profiler to determine which Set option changed. Avoid changing Set options within stored procedures. It is better to set them at the connection level. Ensure that Set options are not changed during the lifetime of the connection.
  • Recompilation thresholds for temporary tables are lower than for normal tables. If the recompiles on a temporary table are due to statistics changes, you can change the temporary tables to table variables. A change in the cardinality of a table variable does not cause a recompilation. The drawback of this approach is that the query optimizer does not keep track of a table variable’s cardinality because statistics are not created or maintained on table variables. This can result in non-optimal query plans. Test the different options and choose the best one (a sketch of this and the query-hint options follows this list).
  • Another option is to use the Keep Plan query hint. This sets the threshold of temporary tables to be the same as that of permanent tables. The EventSubclass column indicates “Statistics Changed” for an operation on a temporary table.
  • To avoid recompilations that are due to changes in statistics (for example, when the plan becomes suboptimal due to change in the data statistics), specify the KeepFixed Plan query hint. With this option in effect, recompilations can only happen because of correctness-related reasons (for example, when the underlying table structure has changed and the plan no longer applies) and not due to statistics. An example might be when a recompilation occurs if the schema of a table that is referenced by a statement changes, or if a table is marked with the sp_recompile stored procedure.
  • Turning off the automatic updates of statistics for indexes and statistics that are defined on a table or indexed view prevents recompiles that are due to statistics changes on that object. Note, however, that turning off the Auto-Stats feature by using this method is usually not a good idea. This is because the query optimizer is no longer sensitive to data changes in those objects and suboptimal query plans might result. Use this method only as a last resort after exhausting all other alternatives.
  • Batches should have qualified object names (for example, dbo.Table1) to avoid recompilation and to avoid ambiguity between objects.
  • To avoid recompiles that are due to deferred compiles, do not interleave DML and DDL or create the DDL from conditional constructs such as If statements.
  • Run Database Engine Tuning Advisor (DTA) to see if any indexing changes improve the compile time and the execution time of the query.
  • Check to see if the stored procedure was created with the With Recompile option or if the Recompile query hint was used. If a procedure was created with the With Recompile option, in SQL Server 2005 SP3, you may be able to take advantage of the statement level Recompile hint if a particular statement within that procedure needs to be recompiled. This would avoid the necessity of recompiling the whole procedure each time it executes, while at the same time allowing the individual statement to be compiled.
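
The sketch below illustrates two of the options above with placeholder object names: replacing a temporary table with a table variable so that cardinality changes no longer trigger recompiles, and keeping a temporary table but applying the Keep Plan or KeepFixed Plan query hints.

    -- Option 1: a table variable. Cardinality changes do not trigger recompilation,
    -- but no statistics are maintained, so verify that the resulting plans are acceptable.
    DECLARE @Recent TABLE (OrderID INT PRIMARY KEY, Total MONEY);
    INSERT INTO @Recent (OrderID, Total)
    SELECT OrderID, Total FROM dbo.Orders
    WHERE OrderDate >= DATEADD(DAY, -1, GETDATE());

    -- Option 2: keep the temporary table, but relax statistics-driven recompiles with hints.
    CREATE TABLE #Recent (OrderID INT PRIMARY KEY, Total MONEY);
    INSERT INTO #Recent (OrderID, Total)
    SELECT OrderID, Total FROM dbo.Orders
    WHERE OrderDate >= DATEADD(DAY, -1, GETDATE());

    SELECT t.OrderID, t.Total
    FROM #Recent AS t
    OPTION (KEEP PLAN);        -- temp-table recompile threshold behaves like a permanent table

    SELECT t.OrderID, t.Total
    FROM #Recent AS t
    OPTION (KEEPFIXED PLAN);   -- recompile only for correctness reasons, not statistics changes

    DROP TABLE #Recent;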

Auto-Param Attempts/sec

This monitor returns the number of auto-parameterization attempts per second.

The total for this monitor should be the sum of the failed, safe, and unsafe auto-parameterizations. Auto-parameterization occurs when an instance of SQL Server tries to parameterize a Transact-SQL request by replacing some literals with parameters so that reuse of the resulting cached execution plan across multiple similar-looking requests is possible. Note that auto-parameterizations are also known as simple parameterizations in newer versions of SQL Server. This counter does not include forced parameterizations.
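
For illustration, a simple single-table query containing a literal, such as SELECT OrderID FROM dbo.Orders WHERE OrderID = 42 (placeholder names), is a candidate for simple parameterization. One way to check whether such plans are being parameterized and reused is to look at the plan cache; in parameterized entries the literal appears replaced by a parameter marker such as @1.

    -- Parameterized ("prepared") plans in the cache and how often they have been reused.
    SELECT cp.objtype, cp.usecounts, st.text
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE cp.objtype = 'Prepared'
    ORDER BY cp.usecounts DESC;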

Potential Issue: Bad T-SQL coding practices can increase recompilation frequency and adversely affect SQL Server's performance. Such situations can be debugged and corrected in many cases.

Longest Transaction Running Time

This performance counter displays the length of time, in seconds, that the longest-running transaction has been active.

Transactions that run longer than others use more resources. This counter can be used to track down procedures and calls that are taking longer than expected by identifying the specific transaction(s).

Potential Issue: Long-running transactions can prevent truncation of transaction logs. This can cause the transaction log files to grow until they consume all available physical disk space, shutting down access to the database.

Resolution: Check the functionality of the query and/or redesign the long running transaction.
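
To identify which transaction is driving this counter, one approach (a sketch that assumes you have VIEW SERVER STATE permission) is to query the transaction DMVs for the oldest active transactions and the sessions that own them.

    -- Oldest active transactions and the sessions that own them.
    SELECT TOP (5)
           at.transaction_id,
           at.name AS transaction_name,
           at.transaction_begin_time,
           DATEDIFF(SECOND, at.transaction_begin_time, GETDATE()) AS running_seconds,
           st.session_id
    FROM sys.dm_tran_active_transactions AS at
    JOIN sys.dm_tran_session_transactions AS st
         ON st.transaction_id = at.transaction_id
    ORDER BY at.transaction_begin_time ASC;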

Failed Auto-Params/sec

This monitor returns the number of failed auto-parameterization attempts per second.

The value returned by this monitor should be low, preferably near zero. Note that auto-parameterizations are also known as simple parameterizations in later versions of SQL Server.

Resolution: Bad T-SQL coding practices can increase recompilation frequency and adversely affect SQL Server's performance. Such situations can be debugged and corrected in many cases.

Full Scans/sec

This performance counter returns the number of Full Scans on base tables or indexes.

This is the number of unrestricted full scans per second. These can be either base-table or full-index scans.

Potential Issue: Values greater than 1 or 2 indicate that table or index page scans are occurring. If the CPU is running high, investigate whether this counter is related to the cause. You can rule this out as a problem if the full scans are on small tables.

Resolution: Following are a few of the main causes of high Full Scans/sec:

  • Missing indexes
  • Too many rows requested. Queries with missing indexes or too many rows requested have a large number of logical reads and increased CPU time.
  • Scans are I/O-intensive and should be kept to a minimum within your databases. Identify the tables that have a large number of scans against them (a query sketch follows this list). Review the fill factor you have set up on your indexes and minimize it where appropriate.
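
One way to find the tables and indexes receiving the most scans is the sketch below, which reads the index usage DMV for the current database; the counts accumulate since the last instance restart.

    -- Tables and indexes with the highest scan counts since the last restart.
    SELECT TOP (20)
           OBJECT_SCHEMA_NAME(ius.object_id) AS schema_name,
           OBJECT_NAME(ius.object_id)        AS table_name,
           i.name                            AS index_name,
           ius.user_scans,
           ius.user_seeks
    FROM sys.dm_db_index_usage_stats AS ius
    JOIN sys.indexes AS i
         ON i.object_id = ius.object_id
        AND i.index_id  = ius.index_id
    WHERE ius.database_id = DB_ID()
    ORDER BY ius.user_scans DESC;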

Range Scans/sec

This performance counter returns the number of Qualified Range Scans through indexes per second.

Plan Re-Use

A query plan is used to execute a query.

Plan re-use is desirable for Online Transaction Processing (OLTP) workloads because re-creating the same plan (for similar or identical transactions) is a waste of CPU resources.

Potential Issue: Zero-cost plans are not cached (not re-used) in SQL Server 2005 SP2. Applications that use zero-cost plans have lower plan re-use, but this is not a performance issue.

Resolution: Review your plan re-use design. Consider using the free SolarWinds SQL Plan Warning tool.
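
One common way to approximate plan re-use from the performance counters (an approximation, not the widget's own formula) is (Batch Requests - SQL Compilations) / Batch Requests, which you can compute from the cumulative counter values.

    -- Approximate plan re-use percentage since the last restart.
    SELECT (br.cntr_value - cp.cntr_value) * 100.0
           / NULLIF(br.cntr_value, 0) AS plan_reuse_pct
    FROM sys.dm_os_performance_counters AS br
    CROSS JOIN sys.dm_os_performance_counters AS cp
    WHERE br.counter_name = 'Batch Requests/sec'
      AND cp.counter_name = 'SQL Compilations/sec';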

Probe Scans/sec

This performance counter returns the number of probe scans per second that are used to directly find, at most, one single qualified row in an index or base table.

There are no thresholds associated with this performance counter.

Recompilations / Compilations

This performance counter shows the ratio of SQL Recompilations to SQL Compilations.

SQL Recompilations should be less than 10% of SQL Compilations.

Potential Issue: Returned values that are high can indicate heavier use of temporary tables.

Resolution: Change stored procedures so that they do not change schemas, and use table variables instead of temporary tables.

Compilations/Recompilations/sec

Monitoring the number of query compilations and recompilations and the number of batches received by an instance of SQL Server gives you an indication of how quickly SQL Server is processing user queries and how effectively the query optimizer is processing the queries.

Compilation is a significant part of a query's turnaround time. To save the compilation cost, the Database Engine saves the compiled query plan in a query cache. The objective of the cache is to reduce compilation by storing compiled queries for later reuse, eliminating the need to recompile queries when they are later executed. However, each unique query must be compiled at least once.

As a rule of thumb, SQL Recompilations/sec should be less than SQL Compilations/sec divided by 10; that is, recompilations should stay below 10% of compilations.
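
To check the 10% guideline directly, you can compare the cumulative compilation and recompilation counters (a sketch; values accumulate since the last restart).

    -- Recompilations as a percentage of compilations since the last restart.
    SELECT rc.cntr_value * 100.0 / NULLIF(c.cntr_value, 0) AS recompile_pct
    FROM sys.dm_os_performance_counters AS rc
    CROSS JOIN sys.dm_os_performance_counters AS c
    WHERE rc.counter_name = 'SQL Re-Compilations/sec'
      AND c.counter_name  = 'SQL Compilations/sec';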

Potential Issue: Query recompilations can be caused by the following factors:

  • Schema changes, including base schema changes such as adding columns or indexes to a table, or statistics schema changes such as inserting or deleting a significant number of rows from a table.
  • Environment (Set statement) changes. Changes in session settings such as Ansi_Padding or Ansi_Nulls can cause a query to be recompiled (a small illustration follows this list).
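
As a small illustration of the Set-option point above, establish plan-affecting session settings once when the connection opens and leave them unchanged, rather than toggling them inside procedures or mid-batch.

    -- Establish plan-affecting session settings once, right after connecting,
    -- and keep them unchanged for the lifetime of the connection.
    SET ANSI_NULLS ON;
    SET ANSI_WARNINGS ON;
    SET ARITHABORT ON;

    -- Anti-pattern (shown only as a comment): changing a plan-affecting SET option
    -- inside a stored procedure or mid-batch can force subsequent statements to recompile.
    -- SET ARITHABORT OFF;
    -- SELECT OrderID FROM dbo.Orders;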

Zooming

You can have the chart show a predetermined time period of data by clicking any one of the three Zoom buttons in the header of the Zoom column. Alternatively, you can have the chart show a specific date range by dragging the mouse over an area of the chart. The time and date displayed at the top of the widget show the amount of data currently loaded into the chart. This can be changed by clicking Edit within the widget.

The Value from Last Poll column shows the most recent data available for the listed statistic.