Documentation for Server & Application Monitor
Monitoring your applications and environment is a key capability of Hybrid Cloud Observability and is also available in a standalone module, Server & Application Monitor (SAM). Hybrid Cloud Observability and SAM are built on the self-hosted SolarWinds Platform.

Storage

This widget contains monitors specific to the storage status of the current database. The icons of the listed performance counters in the Statistic Name column change colors to reflect each counter's current state. Clicking any performance counter in the Statistic Name column takes you to the Performance Counter Details page for that performance counter.

If the value of a listed performance counter crosses the Warning threshold, the chart for that counter displays a yellow background. If the value crosses the Critical threshold, the chart displays a red background. Hovering over any time period within the chart displays detailed information about that time period in a tooltip. Hovering over a counter displays detailed information about that performance counter.

Counter Expert Knowledge

Backup/Restore Throughput/sec

This performance counter shows the Read/Write throughput for backup and restore operations of a database per second.

You can measure how the performance of the database backup operation changes when more backup devices are used in parallel, or when faster devices are used. Throughput of a database backup or restore operation allows you to determine the progress and performance of your backup and restore operations. There are no thresholds associated with this performance counter.
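As an illustration, the average throughput of a backup or restore can be computed from the backup size and elapsed time. This is a minimal sketch; the function name and the sample sizes and durations are illustrative assumptions, not values from SAM.

```python
# Sketch: estimating average backup/restore throughput from backup size and
# elapsed time, mirroring what the Backup/Restore Throughput/sec counter reports.
# The sizes and durations below are illustrative, not measured values.

def backup_throughput_mb_per_sec(backup_size_mb: float, elapsed_sec: float) -> float:
    """Average throughput of a backup or restore operation in MB/s."""
    if elapsed_sec <= 0:
        raise ValueError("elapsed time must be positive")
    return backup_size_mb / elapsed_sec

# Example: the same 10 GB backup written to one device vs. two devices in parallel.
single_device = backup_throughput_mb_per_sec(10_240, 512)  # 20.0 MB/s
two_devices = backup_throughput_mb_per_sec(10_240, 270)    # ~37.9 MB/s
print(f"1 device: {single_device:.1f} MB/s, 2 devices: {two_devices:.1f} MB/s")
```

Comparing the two values shows how adding parallel backup devices changes measured throughput.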

Log Cache Reads/sec

This performance counter returns the number of reads performed per second through the Log Manager cache.

Total Size

This performance counter returns the total size of the database including white space.

Potential Issue:

Running out of storage space.

Resolution: Shrink the database if free space is running low.
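A hedged sketch of the free-space check implied above; the 10% warning threshold and the function name are assumptions for illustration, not SAM defaults.

```python
# Sketch: flagging low free storage space. The 10% threshold is a hypothetical
# example, not a SAM default.

def storage_state(free_bytes: int, total_bytes: int, warn_pct: float = 10.0) -> str:
    """Classify storage as ok or low based on percentage of free space."""
    free_pct = 100.0 * free_bytes / total_bytes
    return "low - consider shrinking" if free_pct < warn_pct else "ok"

# 5 GB free on a 100 GB volume is 5%, below the 10% example threshold.
print(storage_state(free_bytes=5 * 2**30, total_bytes=100 * 2**30))
```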

Log File(s) Size

This performance counter shows the cumulative size (in kilobytes) of all the transaction log files in the database.

Every transaction in the database is logged to the Transaction log. This is useful during recovery in the event of data loss. Information about the log file is shown under the Files section of the Database Overview page in SQL Server Management Studio.

The size of this file is based on the logging level set for the database. By default, a new database uses the Full recovery model, or the recovery model that the model database is configured with. Transactions in the log file are only removed when a complete backup or transaction log backup is initiated. This enables pinpoint recovery for critical applications. Be aware that if the transaction log is not backed up on a regular basis, it grows until your disk is completely full. The transaction log should be adjusted to a reasonable size, which depends on the number of transactions you expect and how often you perform backups.

Set the correct Autogrowth properties: The default of 10% Autogrowth for data and log files should be enough for low use databases. Conversely, a 500 MB Autogrowth rate may be better suited for a busy database, preventing a large I/O impact caused by normal Autogrowth operations.
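The trade-off between percentage and fixed Autogrowth can be sketched as follows; the file sizes and the helper name are illustrative assumptions.

```python
# Sketch: comparing percentage vs. fixed Autogrowth increments. With 10% growth,
# each increment gets larger as the file grows, so one growth event on a large
# file can cause a big I/O burst; a fixed 500 MB increment stays predictable.

def next_growth_mb(current_size_mb, percent=None, fixed_mb=None):
    """Size of the next Autogrowth increment under either setting."""
    if percent is not None:
        return current_size_mb * percent / 100.0
    return fixed_mb

# A 50 GB file growing by 10% allocates 5 GB in a single operation...
print(next_growth_mb(51_200, percent=10))   # 5120.0 MB
# ...while a fixed 500 MB increment is constant regardless of file size.
print(next_growth_mb(51_200, fixed_mb=500))  # 500 MB
```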

Potential Issue: If the returned value occasionally drops to zero, this can indicate a problem with sys.master_files, the system view that SQL Monitor queries to collect data.

Even under the Simple recovery model, writing to the transaction log faster than it can be cleared triggers Autogrowth.

Resolution: If performing a large insert operation, consider switching the recovery model to Bulk Logged for the duration of the insert.

  • (Recommended) If you do not need pinpoint recovery, switch to Simple Recovery model.
  • If your database is configured with the Full or Bulk Logged recovery model, back up the log on a regular basis so it can be truncated to free up log space.

This action removes inactive virtual log files, but does not reduce the file size.

Data File(s) Size

This performance counter shows the cumulative size (in kilobytes) of all the data files in the database including any automatic growth.

Monitoring this counter is useful for determining the correct size of tempdb. The value returned from this monitor reports the size of your database (not including the Transaction log). Information about distinct primary data files and secondary data files is shown under the Files section of the Database Overview page in SQL Server Management Studio.

Growing a file on demand is expensive and degrades performance significantly. Autogrowth should only serve as a safety net that lets a database grow if it unexpectedly runs out of space; it should not be used to manage your MDF size increases. Data files should be sized properly when they are initially created to accommodate expected growth. This also helps avoid fragmentation, which leads to better database performance.
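A minimal sketch of pre-sizing a data file to cover expected growth; the current size, monthly growth rate, and one-year horizon are illustrative assumptions, not values from SAM.

```python
# Sketch: pre-sizing a data file so that Autogrowth rarely fires. The inputs
# are hypothetical; in practice they would come from your own growth history.

def initial_file_size_mb(current_mb: float, monthly_growth_mb: float,
                         months: int = 12) -> float:
    """Recommended initial file size covering roughly one year of growth."""
    return current_mb + monthly_growth_mb * months

# A 20 GB database growing ~1 GB/month should be sized at about 32 GB.
print(initial_file_size_mb(current_mb=20_480, monthly_growth_mb=1_024))  # 32768.0
```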

Potential Issue: Be wary of large and unusual increases in the size of your data files. Generally speaking, your database should be sized to minimize Autogrowth. Growth operations are expensive in terms of I/O, and they fragment your data and log files. If the returned value occasionally drops to zero, this can indicate a problem with sys.master_files, the system view that SQL Monitor queries to collect data.

Resolution:

  • Ensure that the Full recovery model is used for your databases and that you regularly back up everything, including the transaction log.
  • Manually add space as data increases in size. It is recommended that you have approximately one year of space available for data growth. If this is not possible, move the database to a higher capacity drive or simply upgrade the drive.

    Compressing files causes fragmentation and is therefore not recommended.

Shrink Space

This is essentially the white space in the database that can be reclaimed by compacting/shrinking the database.

Shrinking data files recovers space by moving pages of data from the end of the file to unoccupied space closer to the front of the file. When enough free space is created at the end of the file, data pages at the end of the file can be deallocated and returned to the file system.

Potential Issue: Data that is moved to shrink a file can be scattered to any available location in the file. This causes index fragmentation and can slow the performance of queries that search a range of the index.

Resolution: To eliminate the fragmentation, consider rebuilding the indexes on the file after shrinking.

Average Bytes/Read

Perfmon captures the total number of bytes sent to the disk (write) and retrieved from the disk (read) over a period of one second.

The Disk Read Bytes/sec and the Disk Write Bytes/sec performance counters break down the results, displaying only read bytes or only write bytes, respectively.

Average Bytes/Write

Perfmon captures the total number of bytes sent to the disk (write) and retrieved from the disk (read) over a period of one second.

The Disk Read Bytes/sec and the Disk Write Bytes/sec performance counters break down the results, displaying only read bytes or only write bytes, respectively.

Average Read Latency

This performance counter indicates the response time of the disk: how long it takes for a read request to be acknowledged by the disk.

The average read latency should be less than 20ms for database files.

Average Write Latency

Avg. Disk sec/Write is the average time, in seconds, to write data to the disk.

This analysis determines if any of the physical disks are responding slowly.

Potential Issue: If response times are between 0.015 and 0.025 seconds (15 and 25 milliseconds), the disk subsystem is keeping up with demand but has little headroom left.

If response times are greater than 0.025 seconds (25 milliseconds), noticeable slowdowns and performance issues affecting users may be occurring.
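The latency bands above can be sketched as a simple classification. The 15 ms and 25 ms thresholds come from the text; the function name and messages are our own.

```python
# Sketch: classifying Avg. Disk sec/Write readings into the bands described
# above (healthy, warning, critical).

def classify_write_latency(avg_disk_sec_per_write: float) -> str:
    """Classify write latency; input is in seconds, as Perfmon reports it."""
    ms = avg_disk_sec_per_write * 1000.0
    if ms > 25:
        return "critical: noticeable slowdowns likely"
    if ms > 15:
        return "warning: little headroom left"
    return "healthy"

print(classify_write_latency(0.008))  # healthy
print(classify_write_latency(0.018))  # warning: little headroom left
print(classify_write_latency(0.030))  # critical: noticeable slowdowns likely
```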

Forwarded Records/sec

This performance counter returns the number of records per second fetched through forwarded record pointers.

Heaps have one notable feature: forwarded records. If a record needs to be updated, the updated record is larger than the current record, and there is no space on the page to fit the new record, SQL Server has two options:

  1. Move the record to a new page and change all the non-clustered index records that point to it to point to the new location of the record.
  2. Move the record to a new page and leave a forwarding record in the original location to point to the new location.

Potential Issue: This is one drawback of using heaps: the extra space wasted on forwarding/forwarded records. Another drawback is that when scanning the heap, forwarding records must be followed immediately (rather than being ignored and the forwarded records read when they are encountered). This vastly reduces the possibility of read anomalies, such as non-repeatable reads or missed rows, if a row moves before the scan point during a scan.

Resolution: Once a heap contains forward pointers, the only way to remove them is a table reorganization. There are a few options to do this:

  • The simplest one would be to create a clustered index on the data table and drop it again.
  • To avoid forward pointers entirely, create a clustered index from the beginning. A clustered index keeps the data rows in its leaf node level. Therefore the data is always sorted according to the index keys and forward pointers are not used. This is like a continuous online Reorg in this regard.
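A toy model (not SQL Server's actual page format) can illustrate how a forwarding record is left behind when an updated row no longer fits on its page; all names, capacities, and payloads here are invented for illustration.

```python
# Toy model of forwarding records in a heap: when an updated row no longer fits
# on its page, the row moves to a new page and a small forwarding stub stays
# behind, so non-clustered index entries pointing at the old location stay valid.

class HeapPage:
    def __init__(self, capacity):
        self.capacity = capacity
        self.rows = {}  # row_id -> payload string, or ("FWD", page_index) stub

    def free_space(self):
        # A forwarding stub is modeled as a fixed 8-byte cost.
        used = sum(8 if isinstance(v, tuple) else len(v) for v in self.rows.values())
        return self.capacity - used

def update_row(pages, page_idx, row_id, new_payload):
    """Update a row; if it no longer fits, move it and leave a forwarding record."""
    page = pages[page_idx]
    growth = len(new_payload) - len(page.rows[row_id])
    if growth <= page.free_space():
        page.rows[row_id] = new_payload       # fits: update in place
        return page_idx
    new_idx = len(pages)                      # move the row to a fresh page
    pages.append(HeapPage(capacity=100))
    pages[new_idx].rows[row_id] = new_payload
    page.rows[row_id] = ("FWD", new_idx)      # forwarding record stays behind
    return new_idx

pages = [HeapPage(capacity=10)]
pages[0].rows["r1"] = "tiny"
update_row(pages, 0, "r1", "a much larger payload")
print(pages[0].rows["r1"])  # ('FWD', 1)
```

Scans that hit the stub on page 0 must immediately follow it to page 1, which is the extra I/O cost the text describes.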

Log Cache Reads/sec

This performance counter displays the number of Reads performed per second through the Log Manager cache.

There are no thresholds associated with this performance counter.

Log Cache Hit Ratio

This performance counter returns the percentage of Log cache reads satisfied from the Log cache.

The log cache is used to hold information that is written to the transaction log. There are no thresholds associated with this performance counter.
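The ratio itself is straightforward to compute; this sketch assumes the hit and total read counts have already been collected, and the function name is our own.

```python
# Sketch: the Log Cache Hit Ratio is the percentage of log cache reads that
# were satisfied from the cache rather than from disk.

def log_cache_hit_ratio(cache_hits: int, total_reads: int) -> float:
    """Percentage of log cache reads served from cache; 0.0 when idle."""
    if total_reads == 0:
        return 0.0
    return 100.0 * cache_hits / total_reads

print(log_cache_hit_ratio(950, 1000))  # 95.0
```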

Number of Reads/sec

This performance counter returns the number of physical database page reads issued per second.

This statistic displays the total number of physical page reads across all databases. 80 to 90 reads per second is normal; anything above 90 indicates an indexing or memory constraint.
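The guideline above can be sketched as a simple classification; the 90 reads/sec threshold comes from the text, while the function name and messages are our own.

```python
# Sketch: applying the page-reads guideline (80-90/sec normal, above 90
# suggests an indexing or memory constraint).

def classify_page_reads(reads_per_sec: float) -> str:
    if reads_per_sec > 90:
        return "investigate: possible indexing or memory constraint"
    return "normal"

print(classify_page_reads(85))   # normal
print(classify_page_reads(120))  # investigate: possible indexing or memory constraint
```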

Potential Issue: Because physical I/O is expensive, you may be able to minimize the cost by using a larger data cache, intelligent indexes, and more efficient queries, or by changing the database design.

Resolution: Attempt to tune the application so that fewer I/O operations are required. For example, perhaps I/O would be reduced if there were appropriate indexes or if the database design were de-normalized. If the applications cannot be tuned, you need to acquire disk devices with more capacity. Compare to the Memory: Pages/sec performance counter to see if there is paging while the SQL Server:Buffer Manager\Page reads/sec is high. Note: Before adjusting the fill factor, at a database level, compare the SQL Server:Buffer Manager\Page reads/sec performance counter to the SQL Server:Buffer Manager\Page writes/sec counter, and use the fill factor option only if writes are a substantial fraction of reads (greater than 30 percent).

Number of Writes/sec

This performance counter returns the number of data pages written per second.

Page writes per second are the total number of physical writes, including checkpoint writes, lazy writer writes, and large block writes done during index creation or bulk copy operations.

Potential Issue: If Page Writes/sec is high, this might indicate a SQL Server performance issue.

Resolution: If these values are high, you can try to reduce them by tuning your application or database to reduce I/O operations (for example, through index coverage, better indexes, or normalization), by increasing the I/O capacity of the hardware, or by adding memory.

Total Size: Percent Change 24 Hrs

This value is the same as the value returned by the Total Size performance counter, but calculated as a percentage of change over the last 24 hours.

This metric allows users to be alerted when the database experiences sudden growth.
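The metric can be sketched as a simple percent-change calculation over two samples taken 24 hours apart; the sample values and function name are illustrative.

```python
# Sketch: Total Size: Percent Change 24 Hrs computed from two size samples.

def percent_change_24h(size_now_mb: float, size_24h_ago_mb: float) -> float:
    """Percentage change in database size over the last 24 hours."""
    return 100.0 * (size_now_mb - size_24h_ago_mb) / size_24h_ago_mb

# A database that grew from 10 GB to 11 GB in a day changed by 10%.
print(percent_change_24h(11_000, 10_000))  # 10.0
```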

Potential Issue: A database that suddenly changes in size by a high percentage could be indicative of a problem that should be investigated.

Zooming

You can have the chart show a predetermined time period of data by clicking any of the three Zoom buttons at the head of the Zoom column. Alternatively, you can have the chart show a specific date range by dragging the mouse over an area of the chart. The time and date displayed at the top of the resource indicate the range of data currently loaded into the chart. This can be changed by clicking Edit within the resource.

The Value from Last Poll column shows the most recent data available for the listed statistic.