Documentation for Orion Platform

Scalability Engine Guidelines for SolarWinds products

Your Orion Platform installation consists of a Main Polling Server (Orion Web Console and the Main Polling Engine), and the Orion Database Server. The polling engine gathers device statistics and stores the information on the Orion Database Server. The Main Polling Server reads the stored information from the Orion Database Server.

  Check out this short video (0:58) on the Enterprise-class scalability of Orion products.

Your Main Polling Engine polls a set number of elements, depending on the Orion Platform product. Too many elements on a single polling engine can negatively impact your SolarWinds server. When the maximum polling throughput of a single polling engine is reached, polling intervals are automatically increased to handle the higher load. To keep the default polling intervals, add polling capacity.

What is a scalability engine?

"Scalability engine" is a general term that refers to any server that extends the monitoring capacity of your SolarWinds installation, such as Additional Polling Engines (APEs), Additional Web Servers (AWS), or High Availability (HA) backups.

How do I know that I need to scale my Orion Platform product?

When the polling capacity of a polling engine approaches or exceeds its limit, Orion Platform products notify you.

  • See Notifications in the Orion Web Console.


  • Review your alerts. If the Polling rate limit exceeded out-of-the-box alert is enabled, the alert sends an email and adds an entry to All Active Alerts.


Select the scalability option suitable for your environment and deployed Orion Platform products

Review the scalability options and compare them with options available for Orion Platform products you have deployed.

Additional Polling Engine (APE)

Supported products: All Orion Platform products

To increase the polling capacity of your deployment, deploy APEs.

  • Centralized Deployment: Poll from the main data center. Add APEs to your remote sites to enhance visibility. » Learn more

  • Distributed Deployment: Install an Orion Platform instance in each remote site and aggregate the data with EOC for a consolidated view. » Learn more

Free Poller
  • LA
  • SAM with node-based licenses
  • SCM
  • SRM
  • VMAN

An additional polling engine that does not require an extra license.

You cannot stack free poller licenses.

Additional Web Server

Supported products: All Orion Platform products

To improve the performance of your Orion Web Console by load-balancing a large number of users, or in secured environments where your Orion Platform products sit behind a firewall, deploy Additional Web Servers. » Learn more
Stacking licenses
  • NPM 12.2 and later
  • SAM 6.9 and later with component-based licenses

If you have enough resources on your polling engine server, apply multiple licenses on the server to increase its polling capacity. » Learn more

Orion Remote Collector (ORC)

  • NAM 2020.2.1
  • SAM 2020.2.1 with node-based licenses

To securely monitor offices over low bandwidth and high latency connections or small offices in remote locations, deploy an Orion Remote Collector. » Learn more

Remote Office Poller for Additional Polling Engines (ROP)

  • NPM
  • SAM
  • SRM
  • UDT
  • VNQM
  • WPM

To deploy your Orion Platform product in numerous remote locations without scaling up your installation, deploy a Remote Office Poller for Additional Polling Engine (ROP, mini-poller). » Learn more

High Availability

Supported products: All Orion Platform products except for ETS

To implement failover protection for your Orion Server and additional polling engines, deploy High Availability.

Deploy the selected scalability option

Review scalability requirements and find out more about scalability options.

Centralized Deployment with Additional Polling Engines

In a Centralized Deployment with APEs, data is polled locally in each region and stored centrally on the database server in the primary region. All licenses are shared in a Centralized Deployment. Use this deployment if your organization requires centralized IT management and localized collection of monitoring data.

Why deploy APEs centrally?

Users can view all network data from the Orion Web Console in the Primary Region where the main SolarWinds Orion server is installed.

Users can log in to a local Web Console if an Additional Web Server is installed in a secondary region.

With Centralized Deployment, you can:

  • Add, delete, and modify nodes, users, alerts and reports centrally, on the Main Orion Server.
  • Scale all installed Orion Platform products. Scaling one Orion Platform product increases the capacity of the other Orion Platform products. For example, installing an APE for NPM also increases the polling capacity for SAM.
  • Specify the polling engine that collects data for monitored nodes and reassign nodes between polling engines.

All Key Performance Indicators (KPIs), such as Node Response Times, are calculated from the perspective of the polling engine. For example, the response time for a monitored node in Region 2 is equal to the round trip time from the APE in Region 2 to that node.


For additional information on Centralized Deployment, see the SolarWinds Orion Platform Scalability Tech Tip.

Requirements for APEs

The latency (RTT) between each Additional Polling Engine and the database server should be below 200 ms; degradation may begin around 100 ms, depending on your utilization. Ping the Orion SQL Server to find the current latency. A reliable, static connection between the server and the regions is also required.
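As a rough alternative to ping, you can time TCP connections to the SQL Server port. The following is a minimal Python sketch; the hostname `sql.example.local` is a placeholder for your own Orion database server:

```python
import socket
import statistics
import time

def tcp_rtt_ms(host: str, port: int = 1433, samples: int = 5) -> float:
    """Median TCP connect time to host:port in ms, a rough proxy for RTT."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # the connection closes immediately; only the handshake is timed
        times.append((time.perf_counter() - start) * 1000)
    return statistics.median(times)

# Example (hypothetical hostname):
#   rtt = tcp_rtt_ms("sql.example.local")
#   print(f"APE -> DB round trip: {rtt:.0f} ms (limit: 200 ms; degradation ~100 ms)")
```

TCP connect time slightly overstates the raw network RTT, so a result comfortably under 100 ms is a good sign.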

  1. Make sure that your environment meets the requirements for Additional Polling Engines in Multi-module system guidelines. Extra large environment requirements include:

    • Amazon Web Service: m5.xlarge

    • Microsoft Azure: D4s_v3

    • On premise:

      • Quad core processor or better
      • 32 GB RAM
      • Storage: 150 GB, 15,000 RPM
      • 1 x 1 Gb dedicated NIC
      • Windows Server 2019 or 2016, Standard or Datacenter Edition

  2. Make sure that you have opened all necessary ports:

    Additional Polling Engines have the same port requirements as the Main Polling Engine. The following ports are the minimum required for an Additional Polling Engine to ensure the most basic functions.

    | Port  | Protocol | Service/Process                | Direction     | Description |
    |-------|----------|--------------------------------|---------------|-------------|
    | 161   | UDP      | SolarWinds Job Engine          | Outbound      | The port used for sending and receiving SNMP information. |
    | 162   | UDP      | SolarWinds Trap Service        | Inbound       | The port used for receiving trap messages. |
    | 1433  | TCP      | SolarWinds Collector Service   | Outbound      | The port used for communication between the APE and the Orion database. |
    | 1434  | UDP      | SQL Server Browser Service     | Outbound      | The port used for communication with the SQL Server Browser Service to determine how to communicate with certain non-standard SQL Server installations. Required only if your SQL Server is configured to use dynamic ports. |
    | 1801  | TCP      | Message Queuing (WCF)          | Bidirectional | The port used for MSMQ messaging from the Main Polling Engine to the Additional Polling Engine. |
    | 5671  | TCP      | RabbitMQ                       | Outbound      | The port used for SSL-encrypted RabbitMQ messaging from the Main Polling Engine to the Additional Polling Engine. |
    | 17777 | TCP      | SolarWinds Information Service | Bidirectional | The port used for communication between the Additional Polling Engine and the Main Polling Engine. |

  3. Use the Orion Installer to deploy the Additional Polling Engine.
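Before running the installer, you can verify from the APE host that the required TCP ports are reachable. A minimal Python sketch, assuming hypothetical hostnames `db.example.local` and `orion.example.local` (UDP ports 161, 162, and 1434 need a different check, such as an actual SNMP query):

```python
import socket

# TCP ports an Additional Polling Engine must be able to reach.
APE_TCP_PORTS = {
    1433: "Orion database (SolarWinds Collector Service)",
    1801: "MSMQ messaging (Message Queuing WCF)",
    5671: "RabbitMQ (SSL)",
    17777: "SolarWinds Information Service",
}

def check_tcp(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def report(db_host: str, main_engine_host: str) -> None:
    """Print open/blocked status for each required TCP port."""
    for port, desc in sorted(APE_TCP_PORTS.items()):
        target = db_host if port == 1433 else main_engine_host
        status = "open" if check_tcp(target, port) else "BLOCKED"
        print(f"{port:>5}  {desc}: {status}")

# Example (hypothetical hostnames):
#   report("db.example.local", "orion.example.local")
```

A "BLOCKED" result usually points to a firewall rule between the APE and the target server.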

How to deploy?

  1. Make sure that your environment meets the requirements for APEs.

  2. Use the Orion Installer to deploy the Additional Polling Engine.

Distributed Deployment with Main and Additional Polling Engines in regions

In a Distributed Deployment, each region is licensed independently, and data is polled and stored locally in each region. Scale each region independently by adding APEs. You can access monitoring data from each region in a central location with the Enterprise Operations Console (EOC).

SolarWinds Enterprise Operations Console must be installed and licensed if you want to view aggregated data from multiple SolarWinds Orion servers in a Distributed Deployment.

Why deploy Additional Polling Engines in a distributed environment?

With Distributed Deployment you can:

  • Use local administration to manage, administer, and upgrade each region independently.
  • Create, modify, or delete nodes, users, alerts, and reports separately in each region.
  • Export and import objects, such as alert definitions, Universal Device Pollers, and SAM application monitor templates between instances.
  • Mix and match modules and license sizes as needed. For example:
    • Region 1 has deployed NPM SL500, NTA for NPM SL500, UDT 2500, and 3 APEs
    • Region 2 has deployed NPM SLX, SAM500, UDT 50,000, and 3 APEs
    • Region 3 has deployed NPM SL100 only and 3 APEs

EOC 2.0 and later use a feature called SWIS Federation to query for specific data only when needed. This method allows EOC to display live, on-demand data from all monitored SolarWinds Sites and does not store historical data.

How to deploy?

  1. Make sure that your environment meets the requirements for APEs.

  2. Use the Orion Installer to deploy the Additional Polling Engine.

Additional Web Servers

Deploying an Additional Web Server might be helpful in the following cases:

  • The number of users logged in to the Orion Web Console at the same time is close to 50.

  • Orion Web Console is having performance issues.

    The Orion Web Console performance depends on the performance of the computer where you open the browser. See Browser requirements in the latest system requirements.

How to deploy?

  1. Make sure you have all ports required for Additional Web Servers open:

    | Port  | Protocol | Service/Process                   | Direction | Description |
    |-------|----------|-----------------------------------|-----------|-------------|
    | 80    | TCP      | World Wide Web Publishing Service | Inbound   | Default additional web server port. Open the port to enable communication from your computers to the Orion Web Console. |
    | 443   | TCP      | IIS                               | Inbound   | The default port for HTTPS binding. |
    | 1433  | TCP      | SolarWinds Information Service    | Outbound  | The port used for communication between the SolarWinds server and the SQL Server. Open the port from your Orion Web Console to the SQL Server. |
    | 1801  | TCP      | Message Queuing                   | Outbound  | The port used for MSMQ messaging from the Additional Web Server to the Main Polling Engine. |
    | 5671  | TCP      | RabbitMQ                          | Outbound  | The port used for SSL-encrypted RabbitMQ messaging from the Additional Web Server to the Main Polling Engine. |
    | 17777 | TCP      | SolarWinds Information Service    | Outbound  | Orion module traffic. Open the port to enable communication from the Main Polling Engine to the Additional Web Server, and from the Additional Web Server to the Main Polling Engine. |

    If you specify any port other than 80, you must include that port in the URL used to access the web console. For example, if you specify an IP address of 192.168.0.3 and port 8080, the URL used to access the web console is http://192.168.0.3:8080.

  2. Use the Orion Installer to deploy the Additional Web Server. See Installing Additional Polling Engines.

Learn more: Optimize the performance of Orion Web Console.

Remote Office Pollers

To deploy your Orion Platform product in numerous remote locations without scaling up your installation, use a Remote Office Poller for Additional Polling Engine (ROP, mini-poller).

Select a Remote Office Poller by the number of elements you need to poll:

  • ROP250 polls up to 250 elements.
  • ROP1000 polls up to 1000 elements.

How to deploy?

  1. Make sure that all deployed products support this option.

  2. Make sure that your environment meets the requirements for APEs.

  3. Use the Orion Installer to deploy Remote Office Pollers, following the steps for installing Additional Polling Engines.

Orion Remote Collectors

Orion Remote Collector (ORC) is a lightweight distributed polling engine that you can use to monitor devices in your environment agentlessly through WMI and SNMP.

Why deploy?

  • ORCs use the Agent technology to communicate with the Orion Platform
  • ORCs do not need a direct connection to the database
  • ORCs are easy to deploy in remote locations, thanks to their simplified architecture
  • ORCs can poll/cache over unreliable networks (store up to 24 hours with no connection to the polling engine)

Requirements

Supported operating systems:

  • Windows Server 2012 R2
  • Windows Server 2016
  • Windows Server 2019
  • Windows 10

  Linux and AIX are not supported. The server where you deploy the agent must have a unique hostname.

Ports to open:

  Open only one port. The ORC agent is NAT-friendly and supports authenticated proxy traversal, so you can easily deploy the Remote Collector in your DMZ, in branch office locations, and even in the cloud, with very few or no firewall policy changes.

Orion Platform products that support Orion Remote Collector:

  See What is supported? below.

Other requirements:

  • Agents installed on the Orion Server CANNOT be promoted to Remote Collectors.
  • Only Agent-initiated mode is supported (Server-initiated mode is NOT supported).
  • The Agent server must have .NET 4.8 or later installed.

What is supported?

Orion Remote Collector (ORC) does not provide full support for all metrics polled by Additional Polling Engines. For more details, see Orion Remote Collector support.

NAM 2020.2.1 (IPAM, NCM, NPM, NTA, UDT, VNQM)

Common Orion Platform metrics:

  • Node CPU
  • Node Memory
  • Node Status
  • Node Response
  • Volumes

NPM

  • Device Studio
  • Device View
  • Duplex Mismatch
  • FiberChannel
  • Hardware Health
  • Interfaces
  • Multicast
  • NEC
  • Nexus
  • Routing & VRF
  • Switch Stack
  • Topology
  • UnDP
  • VLANs
  • Wireless
  • Wireless Heatmaps
SAM (node-based licenses only)
  • AppInsight for Active Directory
  • AppInsight for IIS
  • AppInsight for SQL
  • AppInsight for Exchange
  • Most application monitor templates
  • Asset inventory
  • Hardware Health

ORC Scalability limits

  • Maximum of 100 Remote Collectors per polling engine

  • Maximum of 1,000 elements per Remote Collector

  • Maximum of 40,000 elements per polling engine across all Orion Remote Collectors

Upgrade/migration details

  • Upgrades occur over the same single port the Orion Remote Collector uses to communicate with the Orion Server
  • Plugins are deployed automatically to Orion Remote Collector as new Orion Platform products are installed
  • When you upgrade the main Orion Server, ORCs are upgraded automatically
  • You can move nodes between Orion Remote Collector and/or APEs

How to deploy?

  1. Deploy an Orion Agent using the agent-initiated communication, for example using the Add node wizard or manually.

    • Agents deployed on Additional Polling Engines or the Main Orion server cannot be used as Orion Remote Collectors.
    • ORCs require that the Agent uses agent-initiated communication.
  2. Promote the node to a Remote Collector:

    1. In the Orion Web Console, click Settings > All Settings > Manage Agents.

    2. Select the node hosting the future ORC and click More Actions > Promote Agent to Remote Collector.

      Promoting an agent to an ORC deploys new agent plugins on the node that enable the agent to poll other devices.

      To see a list of ORCs deployed on your polling engine, click Settings > All Settings > Orion Remote Collectors, or click the Orion Remote Collectors tab on the Manage Agents page.

  3. Specify nodes you want to poll with the Orion Remote Collector:

    • When discovering nodes on your network with the Network Sonar Wizard, select the Remote Collector in the Scan the network using drop-down.


    • When adding nodes for monitoring with the Add Node wizard, select the polling method and select the Remote Collector in the Polling Engine drop down.


    • To use the ORC for polling nodes that are already monitored with Orion Platform, go to the node details page, click Edit Node and specify the ORC in the Polling Engine drop-down.

Uninstall/Demote ORCs to Agents

You cannot change an Orion Remote Collector back to an Orion Agent. To use a different Agent as an ORC, uninstall the ORC/Agent from the original server and deploy an Agent to the new target server.

  1. Navigate to the Manage Nodes page and Remove or reassign all nodes assigned to the ORC.

  2. Navigate to Settings > Polling Engines page and remove the ORC polling engine.

    The Delete unused polling engine button appears only when there are no nodes assigned to the ORC.

  3. Navigate to the Manage Agents page and delete the ORC agent.

    Now you can promote another agent to Orion Remote Collector.

Stack licenses

If your polling engines have enough resources available, you can stack the licenses for some Orion Platform products, such as NPM. Stacking licenses enhances the polling capacity of your Main Polling Engine or APE. A stack requires only one IP address, regardless of the number of APEs.

If the resources on your polling engine are already constrained and you cannot allocate additional resources, consider installing an APE.

How to deploy?

  1. Check that all deployed products support this option.

    In some deployment types (for example, SAM with node-based licensing), APE licenses are stacked automatically.

  2. Make sure that your environment meets the requirements for Additional Polling Engines in Multi-module system guidelines.

  3. Assign multiple licenses to a polling engine with the web-based License Manager. The maximum number of licenses you can apply to a single server depends on the Orion Platform product.

    1. In the Orion Web Console, click Settings > All Settings > License Manager.

    2. Click Add/Upgrade License, enter the activation key and registration details, and click Activate.

      The activated license with activation details displays in the License Manager.

    3. In the License Manager, select the license, and click Assign.

    4. Select a polling engine, and click Assign.

      The license is stacked on the selected polling engine, and its polling capacity is extended.

Scalability Engine Guidelines by Product

The following sections provide guidance for using scalability engines to expand the capacity of your SolarWinds installation.

You can use WMI to poll a maximum of 2,100 nodes in your Orion Platform deployment.


Dameware in Centralized Mode

Dameware Scalability Engine Guidelines

Scalability Options

150 concurrent Internet Sessions per Internet Proxy

5,000 Centralized users per Centralized Server

10,000 Hosts in Centralized Global Host list

5 MRC sessions per Console

Database Performance Analyzer (DPA)

DPA Scalability Engine Guidelines

Scalability Options

Less than 20 database instances monitored on a system with 1 CPU and 1 GB RAM

21 - 50 database instances monitored on a system with 2 CPU and 2 GB RAM

51 - 100 database instances monitored on a system with 4 CPU and 4 GB RAM

101 - 250 database instances monitored on a system with 4 CPU and 8 GB RAM

More than 250 database instances monitored through Central Server mode

See Link together separate DPA servers in the DPA Administrator Guide

Engineer's Toolset on the Web

Engineer's Toolset on the Web Scalability Engine Guidelines

Scalability Options

45 active tools per Engineer's Toolset on the Web instance

3 tools per user session

1 active tool per mobile session

10 nodes monitored at the same time per tool

48 interfaces monitored at the same time per tool

12 metrics rendered at same time per tool

Enterprise Operations Console (EOC)

EOC Scalability Engine Guidelines

Scalability Options

Starting with EOC 2.2, EOC is successfully tested with 100 SolarWinds Sites with a total of 1 million elements (nodes, interfaces, volumes, and so on).

WAN and/or Bandwidth Considerations

Minimal monitoring traffic is sent between the EOC server and any remote Orion servers or APEs.

Connectivity

For redundancy, multiple EOC servers can be connected to the same SolarWinds Site.
Latency

SolarWinds recommends that latency between the EOC server and connected SolarWinds Sites be less than 100 ms (both ways). EOC can function at higher latencies, but performance might be affected.

EOC was tested with up to 500 ms of latency and remained functional, but performance (specifically with reports) was affected.

IP Address Manager (IPAM)

IPAM Scalability Engine Guidelines

Scalability Options

3 million IPs per SolarWinds IPAM main polling engine

Additional 1 million IPs per APE

Log Analyzer (LA)

LA Scalability Engine Guidelines

Scalability Options

1000 events per second

3.6 million events per hour

Up to 90 million events per day
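These three limits are the same figure expressed at different time scales; a quick check of the arithmetic:

```python
# Log Analyzer ingestion limit, from the figures above
eps = 1_000                 # events per second
per_hour = eps * 3600       # -> 3,600,000 events per hour
per_day = per_hour * 24     # -> 86,400,000 events per day (the quoted "up to 90 million")

print(f"{per_hour:,} events/hour, {per_day:,} events/day")
```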

NetFlow Traffic Analyzer (NTA)

NTA Scalability Engine Guidelines

Remote Office Poller

Yes

Main Polling Engine Limits

50k flows per second (FPS) per polling engine

For more information, see Network Performance Monitor (NPM)

Scalability Options

Up to 300k FPS

For more information, see Network Performance Monitor (NPM)

WAN and/or Bandwidth Considerations

1.5% - 3% of total traffic seen by exporter

Other Considerations

See Flow environment best practices in the NTA Getting Started Guide.

Network Automation Manager (NAM)

NAM Scalability Engine Guidelines

Main Polling Engine Limits

40,000 elements at standard polling frequencies:

  • Node and interface up/down: 2 minutes/poll
  • Node statistics: 10 minutes/poll
  • Interface statistics: 9 minutes/poll

25 - 50 concurrent Orion Web Console users

To monitor more than ~1,000,000 elements, consider using SolarWinds Enterprise Operations Console.

When you are not using or evaluating the Orion Log Viewer, the following limits apply:

  • SNMP Traps: ~500 messages per second (~1.8 million messages/hr)
  • Syslog: 700 - 1,000 messages/second (2.5 - 3.6 million messages/hr)

When using the Orion Log Viewer, the limit is 1,000 events per second (syslogs and SNMP traps combined).

Scalability Options

Additional polling engines (APEs) are stacked automatically. You can poll up to 40,000 elements per server. See How is SolarWinds NPM licensed?

Starting with Orion Platform 2020.2, a maximum of 100 APEs per instance with up to 1,000,000 elements monitored per instance.

Starting with Orion Platform 2018.2, a maximum of 100 APEs per instance with up to 400,000 elements monitored per instance.

Starting with Orion Platform 2017.3 SP3, a maximum of 100 APEs per instance with up to 100,000 elements monitored per instance.

Stackable Polling Engines

If you are using the unified NAM license key (NAM 2019.4 and later), polling engines are stacked automatically. You can monitor up to 40,000 elements per server at standard polling frequencies. When you exceed 40,000 elements polled by a polling engine, polling intervals will be automatically extended.

If you are using a license key for each product in the bundle (NAM earlier than 2019.4), scalability limits for individual products apply.

 
Orion Remote Collector

Yes

WAN and/or Bandwidth Considerations

Minimal monitoring traffic is sent between the primary SolarWinds NPM server and any APEs that are connected over a WAN. Most traffic related to monitoring is between an APE and the SolarWinds Orion database.

Other Considerations

How much bandwidth does SolarWinds require for monitoring?

See Orion Server Hardware Requirements in the Orion Platform documentation.

Network Configuration Manager (NCM)

NCM Scalability Engine Guidelines

Remote Office Poller

Yes

Main Polling Engine Limits

~10K devices

Scalability Options

Each SolarWinds NCM instance can support up to 100 APEs. 

Starting with Orion Platform 2017.3 SP3, a maximum of 100 APEs per instance is supported.

Each APE can support ~10K devices. However, the number of devices in the entire environment (the primary engine + all APEs) cannot exceed ~30K. 

Examples:

  • The primary engine and two APEs could support 10K devices each, for a total of 30K devices.
  • The primary engine and 20 APEs could support around 1,400 devices each, but the combined total cannot exceed the 30K maximum. 
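The second example above works out as follows; the per-engine budget is simply the ~30K environment cap divided across all engines:

```python
# NCM device budget per engine: the environment total is capped at ~30K
# regardless of how many engines you deploy.
TOTAL_DEVICE_CAP = 30_000
engines = 1 + 20                        # primary engine + 20 APEs
per_engine = TOTAL_DEVICE_CAP // engines

print(f"{per_engine:,} devices per engine")  # -> 1,428 devices per engine, i.e. "around 1,400"
```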

Integrated standalone mode

Network Performance Monitor (NPM)

NPM Scalability Engine Guidelines

Stackable Polling Engines

Up to four total polling engines may be installed on a single server, for example one Primary Polling Engine with one to three Additional Polling Engines, or four Additional Polling Engines on the same server.

By stacking a polling engine 4 times, you can poll up to 40,000 elements per server.

A stack requires only 1 IP address, regardless of the number of APEs.

Remote Office Poller

ROP250 supports 250 elements

ROP1000 supports 1000 elements

Main Polling Engine Limits

~12k elements at standard polling frequencies:

  • Node and interface up/down: 2 minutes/poll
  • Node statistics: 10 minutes/poll
  • Interface statistics: 9 minutes/poll

25 - 50 concurrent Orion Web Console users

To monitor more than ~1,000,000 elements, consider using SolarWinds Enterprise Operations Console.

When you are not using or evaluating the Orion Log Viewer, the following limits apply:

  • SNMP Traps: ~500 messages per second (~1.8 million messages/hr)
  • Syslog: 700 - 1,000 messages/second (2.5 - 3.6 million messages/hr)

When using the Orion Log Viewer, the limit is 1,000 events per second (syslogs and SNMP traps combined).

Scalability Options

One polling engine for every ~12,000 elements. See How is SolarWinds NPM licensed?

Starting with Orion Platform 2020.2, a maximum of 100 APEs per instance with up to 1,000,000 elements monitored per instance.

Starting with Orion Platform 2018.2, a maximum of 100 APEs per instance with up to 400,000 elements monitored per instance.

Starting with Orion Platform 2017.3 SP3, a maximum of 100 polling engines per instance with up to 100,000 elements monitored per instance.

WAN and/or Bandwidth Considerations

Minimal monitoring traffic is sent between the primary SolarWinds NPM server and any APEs that are connected over a WAN. Most traffic related to monitoring is between an APE and the SolarWinds Orion database.

NetPath™ Scalability

The scalability of NetPath™ depends on the complexity of the paths you are monitoring, and the interval at which you are monitoring them.

In most network environments:

  • You can add up to 100 paths per polling engine.
  • You can add 10 - 20 paths per probe.

    See NetPath requirements for more information.

Other Considerations

How much bandwidth does SolarWinds require for monitoring?

See Orion Server Hardware Requirements in the Orion Platform documentation.

Orion Agents

Orion Agents Scalability Engine Guidelines

Scalability Options

1,000 agents per polling server

Patch Manager

Patch Manager Scalability Engine Guidelines

Scalability Options

1,000 nodes per automation server

1,000 nodes per SQL Server Express instance (SQL Server does not have this limitation)

SQL Express is limited to 10 GB storage. For large deployments, SolarWinds recommends using remote SQL.

Quality of Experience (QoE)

QoE Scalability Engine Guidelines

Scalability Options

  • Maximum throughput (NPAS and SPAS): 1 Gbps
  • Maximum number of nodes per sensor (NPAS): 50 nodes
  • Maximum number of node and application pairs
    (NPAS and SPAS): 50,000 pairs
  • Maximum number of sensors deployed on your network: 1,000 sensors
  • Maximum number of applications per node or sensor
    (NPAS and SPAS): 1,000 applications per node

Security Event Manager (SEM)

SEM Scalability Engine Guidelines

Scalability Options

Up to 216 million events per day (2,500 events per second)

5,000 rule hits per day

Server & Application Monitor (SAM)

SAM Scalability Engine Guidelines

Stackable polling engines

If using the unified SAM license key (SAM 2020.2 and later) with node-based licensing, polling engines are stacked automatically. You can monitor up to 40,000 component monitors per server at standard polling frequencies. When you exceed that limit, polling intervals are automatically extended.

With component-based licensing, 2 polling engines can be installed on a single server. Stacking is supported.

For details, see the SAM licensing model.

 
Remote Office Poller

Yes

Main Polling Engine limits

With node-based licensing, 40,000 component monitors per polling engine at standard polling frequencies.

With component-based licensing, ~8-10K component monitors per polling engine.

25 - 50 concurrent Orion Web Console users

Scalability options

For SAM 2020.2 and later, with node-based licensing:

  • 1 Main Polling Engine (up to 10K component monitors) and up to 100 APEs.
  • 1 APE for every ~40K component monitors at no extra licensing cost. APE licenses are stacked automatically; scalability is built-in with node-based licensing.
  • Up to 550K component monitors.

For SAM 2020.2 and later, with component-based licensing:

  • 1 polling engine for every 10K monitors and up to 100 APEs. Stacking is supported.
  • Up to 550K component monitors.

For SAM 2019.4.x and earlier:

  • 1 polling engine for every ~8-10K component monitors with a maximum of 150K component monitors per primary SAM installation (1 Main Polling Engine and 14 APEs).

For tips on maximizing your polling capacity, see SAM polling recommendations.

To extend beyond these component monitor capacities and surface additional monitored data in a single pane of glass, consider purchasing the SolarWinds Enterprise Operations Console.

Orion Remote Collector

Supported in SAM 2020.2.1 or later, with node-based licensing only.

WAN and/or bandwidth considerations

Minimal monitoring traffic is sent between the Orion server and any APEs connected over a WAN. Most traffic related to monitoring is between an APE and the Orion database server.

Bandwidth requirements depend on the size of the relevant component monitor. Based on 67.5 KB / WMI poll and a 5-minute polling frequency, the estimate is 1.2 Mbps for 700 component monitors.
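The quoted figure follows from that estimate; a quick sketch of the arithmetic (decimal units assumed, so the result rounds to the quoted ~1.2 Mbps):

```python
# WAN bandwidth estimate for WMI polling, using the figures above
kb_per_poll = 67.5            # KB per component monitor per WMI poll
poll_interval_s = 5 * 60      # 5-minute polling frequency
monitors = 700

kb_per_second = kb_per_poll * monitors / poll_interval_s   # 157.5 KB/s
mbps = kb_per_second * 8 / 1000                            # ~1.26 Mbps

print(f"~{mbps:.2f} Mbps for {monitors} component monitors")
```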

Server Configuration Monitor (SCM)

SCM Scalability Engine Guidelines

Scalability Options

An Orion instance with SCM installed can process up to 280 changes/second combined.

If you expect to have more than 1,000 agents per poller, you will need an APE. You can add 1 APE at no extra licensing cost.

Serv-U FTP Server and MFT Server

Serv-U FTP Server and MFT Server Scalability Engine Guidelines

Scalability Options

500 simultaneous FTP and HTTP transfers per Serv-U instance

50 simultaneous SFTP and HTTPS transfers per Serv-U instance

For more information, see the Serv-U Distributed Architecture Guide.

Storage Resource Monitor (SRM)

SRM Scalability Engine Guidelines

Stackable Polling Engines

No; only one APE instance can be deployed on a single host.

Remote Office Poller

Yes

Poller remotability enables each poller to store up to ~1 GB of polled data locally (using MSMQ) if the connection between the polling engine and the database is temporarily lost.

Main Polling Engine Limits

Maximum of 40K LUNs per polling engine (primary or additional)

25 - 50 concurrent Orion Web Console users

Scalability Options

Use APEs for horizontal scaling

The upper limit that can be handled by a single SRM instance is 160K LUNs. For larger environments, please contact SolarWinds for further assistance.

WAN and/or Bandwidth Considerations

Minimal monitoring traffic is sent between the primary SRM server and any APEs that are connected over a WAN. Most traffic related to monitoring is between an APE and the SolarWinds database.

User Device Tracker (UDT)

UDT Scalability Engine Guidelines

Remote Office Poller

Yes

Main Polling Engine Limits

100k ports

Scalability Options

1 APE per 100k additional ports

Maximum of 500k ports per instance (1 Primary Polling Engine and 4 APEs)
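The sizing rule above (100k ports per polling engine, five engines maximum) can be expressed as a small helper. This is an illustrative sketch; `apes_needed` is a hypothetical name, not a SolarWinds tool:

```python
import math

MAX_PORTS_PER_ENGINE = 100_000   # UDT limit per polling engine
MAX_ENGINES = 5                  # 1 Primary Polling Engine + 4 APEs

def apes_needed(ports: int) -> int:
    """Number of APEs required beyond the Primary Polling Engine."""
    engines = math.ceil(ports / MAX_PORTS_PER_ENGINE)
    if engines > MAX_ENGINES:
        raise ValueError("Exceeds the 500k-port limit of a single UDT instance")
    return max(engines - 1, 0)

print(apes_needed(250_000))  # 2 APEs (3 polling engines in total)
```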

WAN and/or Bandwidth Considerations

None

Other Considerations

UDT version 3.1 supports scheduled port discovery.

In UDT version 3.1, the maximum discovery size is 2,500 nodes/150,000 ports.

Virtualization Manager (VMAN)

VMAN Scalability Engine Guidelines

Scalability Options

1 APE per 10,000 monitored virtual machines

For the VMware events feature, VMAN supports a flat amount of 1000 VMware events per second, regardless of deployment size and number of APEs. Important: VMware events utilize the Orion Log Viewer and count toward Orion Log Viewer/Log Analyzer's 1000 EPS limit.

Main Polling Engine system requirements

The Main Polling Engine should be upgraded to meet greater polling demands as the virtual environment increases in size.

Deployment Sizing Guide

For VMAN-specific sizing and scaling guidelines, see the VMAN Deployment Sizing Guide.

VoIP & Network Quality Manager (VNQM)

VNQM Scalability Engine Guidelines

Remote Office Poller

Yes

Primary Polling Engine Limits

~5,000 IP SLA operations

~200k calls/day with 20k calls/hour spike capacity

Scalability Options

1 APE per 5,000 IP SLA operations and 200,000 calls per day

Maximum of 15,000 IP SLA operations and 200,000 calls per day per SolarWinds VNQM instance (SolarWinds VNQM + 2 VNQM APEs)

WAN and/or Bandwidth Considerations

Between Call Manager and VNQM: 3-4 Kbps, based on estimates of ~256 bytes per CDR and CMR and a rate of 20k calls per hour

Web Help Desk (WHD)

WHD Scalability Engine Guidelines

Deployments with fewer than 20 techs

You can run Web Help Desk on a system with:

  • A supported 32-bit operating system
  • A 32-bit Java Virtual Machine (JVM)
  • 4GB RAM (up to 3.7GB for the tech sessions, JVM support, operating system, and any additional services you need to run on the system)

This configuration supports 10–20 tech sessions without memory issues.

To adjust the maximum memory setting, edit the MAXIMUM_MEMORY option in the WebHelpDesk/conf/whd.conf file.

Deployments with more than 20 techs

If your deployment will support more than 20 tech sessions, SolarWinds recommends installing Web Help Desk on a system running:

  • A supported 64-bit operating system
  • A 64-bit JVM
  • 3GB RAM for 20 tech sessions plus 1GB RAM for each additional 10 tech sessions
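The RAM guideline above can be written as a simple sizing formula. A minimal sketch; `whd_ram_gb` is a hypothetical helper name for illustration:

```python
import math

def whd_ram_gb(tech_sessions: int) -> int:
    """Recommended RAM (GB) for a 64-bit WHD deployment, per the rule above:
    3 GB covers the first 20 tech sessions, plus 1 GB per additional 10."""
    if tech_sessions <= 20:
        return 3
    extra_gb = math.ceil((tech_sessions - 20) / 10)
    return 3 + extra_gb

print(whd_ram_gb(45))  # 6 GB: 3 GB base + 3 GB for the 25 extra sessions
```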

To enable the 64-bit JVM, add the following argument to the JAVA_OPTS option in the /library/WebHelpDesk/conf/whd.conf file:

JAVA_OPTS="-d64"

To increase the max heap memory on a 64-bit JVM, edit the MAXIMUM_MEMORY option in the WebHelpDesk/conf/whd.conf file.

For other operating systems, install your own 64-bit JVM and then update the JAVA_HOME option in the WebHelpDesk/conf/whd.conf file to point to your Java installation.
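Putting the settings above together, the relevant whd.conf entries might look like the following. The memory value and Java path are illustrative assumptions, not recommendations:

```shell
# WebHelpDesk/conf/whd.conf (illustrative values)
JAVA_OPTS="-d64"                  # run Web Help Desk on the 64-bit JVM
MAXIMUM_MEMORY=4096               # max heap; size it per the RAM guidance above
JAVA_HOME=/usr/lib/jvm/java-8     # only when pointing to your own 64-bit JVM
```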

Web Performance Monitor (WPM)

WPM Scalability Engine Guidelines

Remote Office Poller

Not directly supported, but recordings may be made from multiple locations.

Main Polling Engine limits

12 monitored transactions per WPM Player. See the next row for details.

Scalability options

SolarWinds recommends one transaction location per 12 monitored transactions.

You can use the Player Load Percentage widget to estimate how many transactions can be assigned to a machine that hosts a WPM Player. Many factors are involved, including:

  • The complexity of assigned transactions.
  • The length of playback for each transaction.
  • The length of intervals between each transaction playback.
  • The processor speed and RAM available on the machine hosting the WPM Player.
  • The number of SEUM-User or domain accounts involved in playback. See How WPM works and Manage SEUM-User accounts.

If you notice a high load percentage, consider increasing the time intervals between polls and/or adding more players to a given location to distribute loads more evenly. See Player Load Percentage in THWACK to learn more.

Frequently Asked Questions

Does each module have its own polling engine?

A standard, licensed APE may have all relevant modules installed on it, and it performs polling for all installed modules. An APE works the same way as your Main Polling Engine on your main server. For example, if you have NPM and SAM with component-based licensing installed, install one APE and it performs polling for both NPM and SAM.

For products that do not require an extra license for an APE, including LA, SAM (node-based licensing only), SCM, SRM, and VMAN, the APE polls data only for the product it is included with, along with Orion Platform data. For example, a SAM APE returns SAM data and basic node metrics provided by the Orion Platform such as status and volume, but not NPM-specific data such as interfaces.

Are polling limits cumulative or independent? For example, can a single polling engine poll 12k NPM elements AND 10k SAM monitors together?

Yes, a single polling engine can poll up to the limits of each module installed, if sufficient hardware resources are available.

Are there different license sizes available for the Additional Polling Engine?

No, the APE is available with an unlimited license.

Can you add an Additional Polling Engine to any size module license?

Yes, you can add an APE to any size license.

Adding an APE does not increase your license size. For example, if you are licensed for an NPM SL100, adding an APE does not increase the licensed limit of 100 nodes/interfaces/volumes, but the polling load is spread across two polling engines.

What happens if the connection from a polling engine to the Orion Database Server is lost?

If there is a connection outage to the Orion Database Server, polling engines use Microsoft Message Queuing (MSMQ) to cache the polled data on the APE servers.

The amount of data that can be cached depends on the disk space available on the polling engine server. The default storage space is 1 GB. Up to one hour of data can be cached.

When the connection to the database is restored, the Orion Database Server is updated with the locally cached data; the oldest data is processed first.

If the database connection is down for longer than an hour, the collector queue becomes full and the newest data is discarded until a connection to the database is re-established.
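The buffering behavior described above can be sketched as a bounded FIFO queue. This is a simplified model of the store-and-forward idea, not the actual MSMQ implementation: samples are cached while capacity remains, the newest samples are discarded once the cache is full, and the backlog drains oldest-first when the connection returns.

```python
from collections import deque

class PollCache:
    """Toy model of a polling engine's store-and-forward cache."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.queue = deque()

    def store(self, sample) -> bool:
        """Cache a sample; drop it (the newest data) if the cache is full."""
        if len(self.queue) >= self.capacity:
            return False  # newest data is discarded, older data is kept
        self.queue.append(sample)
        return True

    def flush(self):
        """On reconnect, forward cached samples oldest-first."""
        while self.queue:
            yield self.queue.popleft()

cache = PollCache(capacity=3)
for sample in ["t1", "t2", "t3", "t4"]:
    cache.store(sample)       # "t4" is dropped: the cache is already full
print(list(cache.flush()))    # ['t1', 't2', 't3'], oldest first
```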