Best practices for SAM templates

Creating SAM application monitor templates involves more than just adding and configuring component monitors. Use these best practices, tips, and tricks about performance enhancements, testing, and scripting to create and customize templates and component monitors.

In addition to these tips, note the following details about SAM templates:

  • To effectively monitor Linux/Unix systems with the Orion agent for Linux, the agent must be installed on the target Linux machine, the node must be managed using the agent within the Orion Platform, and your environment must be configured properly. See Configure Linux/Unix systems for the Orion agent for Linux in the SAM Template Reference.
  • SolarWinds recommends that you check THWACK periodically for updates to SAM's out-of-the-box (OOTB) templates. With the exception of AppInsight templates, templates are not updated automatically during upgrades to avoid overwriting custom changes made to templates. For details, see Import and export SAM templates.

Periodically, SolarWinds releases SAM templates to support the latest product versions, such as Microsoft Windows Server 2019. You can continue using templates for older product versions, but updating to the latest template is recommended.

Performance enhancements

Modify the polling frequency for performance

Depending on the length of calls and the amount of data pulled for a monitor, you may want to modify the polling frequency. Some script monitors only need to run once per day or once per week. For example, to compare MIBs using the SolarWinds MIB Database template, you may only need to run the comparison once a day or once a week.

Extend the polling timeout for long calls

For scripts with lengthy calls for large amounts of data, extend the polling timeout. The default 300 seconds may not be long enough for script processing to complete. If the call may take more time, especially during peak times, increase the timeout to give the system time to complete the call. For example, for MIB database comparison scripts using the SolarWinds MIB Database template, multiple files are called, downloaded, and compared to return status messages and complete specific actions.

Enhance latency and performance by pulling multiple metrics per template

Each script component monitor in a template makes its own calls to the target server, which affects performance and latency. Return up to 10 metrics per script to reduce the number of calls and improve performance. Depending on the size and processing requirements of your scripts, balance metrics and lengthy calls across multiple instances of a script monitor.
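As an illustration, a single script can return several metrics from one call. The following sketch (in Python, with hypothetical metric names and placeholder values) uses SAM's Statistic/Message output convention:

```python
# Sketch of a script component monitor that returns several metrics in one
# call. Metric names and values are hypothetical placeholders; a real script
# would query the target system once and derive all metrics from that call.
import sys

def collect_metrics():
    # Placeholder for a single call to the target server.
    return {"CpuLoad": 12, "MemFreeMb": 2048, "DiskFreePct": 71}

def main():
    metrics = collect_metrics()
    # SAM reads "Statistic.<Name>: <value>" and "Message.<Name>: <text>"
    # lines from standard output; each script supports up to 10 outputs.
    for name, value in metrics.items():
        print(f"Statistic.{name}: {value}")
        print(f"Message.{name}: {name} is {value}")
    return 0  # Up

if __name__ == "__main__":
    sys.exit(main())
```

Returning all three metrics from one script means one call to the target server instead of three.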

Script, monitor, and template testing

Check credentials and server permissions for scripts

Verify that you have the correct credentials, with the assigned account permissions, to execute scripts in the Orion Web Console and on the target server. Most script issues are credential-related. Script monitors may provide fields for credentials, or you may need to provide credentials in the script code, arguments, or command line. Test the script in SAM before monitoring to verify credentials and access.

Test scripts before monitoring

When adding and configuring script component monitors, you need to test the script. When the test completes, SAM registers each returned metric as a numbered output in the Orion SQL Database. You can configure the display of collected metrics and values through the component monitor. Each script monitor supports up to 10 different outputs.

Receive accurate node status

Until tested, scripts and component monitors return an initial unknown status. After testing, polling returns accurate application status.

Script best practices

To learn more about scripting, see the SAM Custom Template Guide.

Use code comments

Code comments document the intent of the code, record decisions made, and track changes. SolarWinds recommends using code comments to keep detailed steps and responses in your code. If other administrators need to work in the script monitors, the comments provide context for the code.

Use # for a comment per line. In PowerShell, use <# ... #> for lengthy comments spanning a code section.

Do not use positional parametrization

In the command line for executing scripts, always pair each value with a named parameter. Do not assume that the position of data in the command line dictates the parameter. For example, use -h for the hostname rather than relying on argument order.
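To sketch the difference, the following Python fragment reads named parameters with argparse instead of assuming argument positions (the -h/-p flags and the sample hostname and port are illustrative placeholders):

```python
# Sketch: read named parameters rather than relying on argument position.
# The -h/-p flags and sample values below are illustrative placeholders.
import argparse

def parse_args(argv):
    # add_help=False frees -h for the hostname parameter.
    parser = argparse.ArgumentParser(add_help=False)
    parser.add_argument("-h", "--host", required=True)
    parser.add_argument("-p", "--port", type=int, required=True)
    return parser.parse_args(argv)

args = parse_args(["-h", "dbserver01", "-p", "5432"])
print(f"Message.PortCheck: checking {args.host} on port {args.port}")
```

Because the parameters are named, the arguments can appear in any order without changing the result.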

Use a header for writing multiple scripts

Create a header in your code to reuse throughout your scripts. The header could include example code and code comments for:

  • A list of exit codes
  • Variables for return metrics commonly used in your scripts
  • Code to determine whether the script is running on the target server or the Orion system

    For example, you can include code that returns a message identifying whether the server is a test system or the Orion server.

    Additionally, you could add a step to save the code if it is not running on the Orion server.
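A minimal sketch of that environment check (shown here in Python; 'ORION-SRV' is a hypothetical placeholder for your Orion server's hostname, and the idea adapts to whichever scripting language your monitors use):

```python
# Hypothetical environment check; replace 'ORION-SRV' with your Orion
# server's actual hostname.
import socket

def environment_message(orion_host="ORION-SRV"):
    if socket.gethostname().lower() == orion_host.lower():
        return "Message.Environment: running on the Orion server"
    return "Message.Environment: running on a test system"

print(environment_message())
```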

Use SolarWinds macros

When using SolarWinds macros, consider assigning them to named variables in your scripts.

The following SolarWinds macros are available for Linux/Unix, Nagios, Windows Script, and PowerShell script monitors:

  • ${USER}
  • ${PORT}
  • ${Node.SysName}
  • ${Node.Caption}
  • ${Node.DNS} - Use this instead of ${IP}.
  • ${Node.ID}
  • ${Component.ID}
  • ${Component.Name}
  • ${Application.Id}
  • ${Application.Name}
  • ${Application.TemplateId}
  • ${Threshold.Warning}
  • ${Threshold.Critical}
  • Node Custom Property Macros ${Node.CustomPropertyName}
  • Application Custom Property Macros ${Application.CustomPropertyName}

For agent-monitored nodes, use the ${Node.SysName} and ${Node.DNS} macros. The ${IP} macro may return a loopback IP address before polling starts.

Report status through exit codes

Scripts must report their status by exiting with the appropriate exit code. SAM uses the exit code to report the status of the monitor, which the user sees in the interface.

A script should first return an exit code that results in an Up (0), Warning (2), or Critical (3) status. When one of these exit codes is received, the appropriate dynamic evidence table structure is created, and all further exit codes are handled correctly. If the component only returns Down (1) or Unknown (4) on first use, the dynamic evidence table structure is not created.

You must test the component monitor after entering the script to properly calibrate the monitor, generate tables, and verify correct communication between the target node, SAM, and the template. SAM interprets the following exit codes:

  • 0 - Up
  • 1 - Down
  • 2 - Warning
  • 3 - Critical
  • Any other value - Unknown, for example 4
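For example, the following sketch (with a hypothetical metric and placeholder thresholds) maps a measured value to these exit codes:

```python
# Sketch: map a measured value to SAM exit codes.
# The metric name and thresholds are hypothetical placeholders.
import sys

def check_queue_depth(depth, warn=100, crit=500):
    print(f"Statistic.QueueDepth: {depth}")
    print(f"Message.QueueDepth: queue depth is {depth}")
    if depth >= crit:
        return 3  # Critical
    if depth >= warn:
        return 2  # Warning
    return 0      # Up

if __name__ == "__main__":
    sys.exit(check_queue_depth(42))
```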

Multiple options for returning exit code and message

Use IF/ELSE or case statements in your scripts to return one of multiple exit codes and messages.

Use error trapping to capture issues

Error trapping code such as try/catch blocks helps capture and report errors, providing detailed information about each issue.
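For instance, a sketch (with a deliberately failing placeholder collection step) that traps the error and exits Down with a descriptive message:

```python
# Sketch: wrap the collection step in try/except so a failure is reported
# as a Down status with a descriptive message instead of an unhandled crash.
# collect_metric() is a placeholder that deliberately fails here.
import sys

def collect_metric():
    raise RuntimeError("connection refused")

def main():
    try:
        value = collect_metric()
        print(f"Statistic.Value: {value}")
        return 0  # Up
    except Exception as exc:
        print(f"Message.Value: error collecting metric: {exc}")
        return 1  # Down, with the error detail in the message

if __name__ == "__main__":
    sys.exit(main())
```

Without the try/except, the script would terminate with an unhandled exception and the monitor would report an unhelpful status with no message.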