Documentation for AppOptics

Logs Collector/Forwarder

Overview

A logs suite plugin is included with the SolarWinds Snap Agent. The logs suite is a set of collectors and publishers that run as part of the main swisnapd agent process. The log publishers included in the logs suite can send logs to Loggly and Papertrail using the various endpoints those services provide. The suite covers a broad range of use cases, such as:

  • Collecting logs from servers and daemons that do not natively support syslog
  • When reconfiguring the system logger is less convenient than a purpose-built daemon (e.g., automated app deployments)
  • Aggregating files not generated by daemons (e.g., package manager logs)
  • Collecting logs from Docker containers
  • Monitoring Kubernetes cluster events

Features

The logs suite supports the following log sources:

  • Log files
  • Windows events
  • Docker container logs
  • Kubernetes events
  • Syslog server

The logs suite supports the following log destinations:

  • Loggly HTTPS bulk endpoint (default)
  • Loggly HTTPS endpoint
  • Loggly syslog endpoint
  • Papertrail HTTPS bulk endpoint
  • Papertrail HTTPS endpoint
  • Papertrail syslog endpoint

Prerequisites

This plugin requires an active Loggly or Papertrail account.

Quick Setup

To enable the logs suite with the basic configuration that detects standard logs on the system, include the --detect-logs option during installation (see Installation - Linux or Installation - Windows).

To further customize your configuration to collect logs from specific sources and, optionally, tag or filter the logs, set up log collectors. See Configuring a Logs Collector for setup instructions.

About Log Collectors

Log collectors are set up using .yaml task files. Multiple collectors can be used to collect logs from various sources, and each can be configured in multiple ways, including tagging logs based on their content.

The following log collectors are available:

  • Files
  • Windows events
  • Docker logs
  • Kubernetes events
  • Syslog server

Task files

You can create as many task files for collectors as you need. This allows you to configure your agent to start additional syslog servers, set up log file monitoring individually for multiple apps, or specify different publishers for different monitoring needs. When you set up multiple task files, the Snap Agent starts each task file as a separate task. If one task fails, the other tasks are not affected.

You can optionally override any publisher settings in tasks, as shown in the sketch below.
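For example, a task file's publish section can override values from the shared publisher configuration. A minimal sketch, modeled on the publisher template shown later on this page (the token value is a placeholder):

    publish:
      - plugin_name: loggly-http-bulk
        config:
          token: <OTHER TOKEN>
          bulk_size: 500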

The Snap Agent includes sample task files for each collector type to make it easier to configure your data collection. Locate the sample task file named xyz.yaml.example, make a copy of the file, and rename the file with a .yaml file extension (ex: xyz.yaml). Once you have configured your task files, restart the agent:

  • On Windows, use the command prompt or PowerShell to run the following commands:

    copy "C:\ProgramData\SolarWinds\Snap\tasks-autoload.d\task-logs-docker.yaml.example" "C:\ProgramData\SolarWinds\Snap\tasks-autoload.d\task-logs-docker.yaml"
    net stop swisnapd
    net start swisnapd
  • On Linux, use the command line to run the following commands (see the note after this list):

    sudo cp /opt/SolarWinds/Snap/etc/tasks-autoload.d/task-logs-docker.yaml.example /opt/SolarWinds/Snap/etc/tasks-autoload.d/task-logs-docker.yaml
    sudo service swisnapd restart
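
On Linux distributions that use systemd, the service command typically delegates to systemctl, so the restart can also be performed with:

    sudo systemctl restart swisnapd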

Tagging in task files

Every collector allows you to set custom tags for specific logs. Depending on your setup and preferences, you can add as many tags as you wish and apply them selectively. The tagging section used in the Files collector looks like this:

#tags:
#  |log-files|[file=/tmp/application.log]|string_line:
#    sometag: somevalue

For logs sent to Loggly, tags are automatically detected and shown. Papertrail does not natively support tagging; however, you can configure the publisher to prepend each log line with tags so they're visible in Papertrail.
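For example, to make tags visible in Papertrail, enable the prefix_with_tags option in the publisher configuration (a sketch based on the papertrail-syslog publisher options documented later on this page; the host and port values are placeholders):

v2:
  publisher:
    papertrail-syslog:
      all:
        host: "HOST.papertrailapp.com"
        port: 12345
        protocol: tls
        ## Prepend each log line with tags so they are visible in Papertrail
        prefix_with_tags: true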

Multiline

The Files collector and the Docker collector can collect and forward multiline logs, such as stack traces.

To enable multiline collection for the Docker collector, uncomment both the multiline section and the logs section with one empty filter, or use user-defined filters:

#multiline:
#  match: "nonfirst"
#  pattern: "^\\s"
#  separator: "\n"
#  flush_after: "500ms"

#logs:
#- filters:

To enable multiline collection for the Files collector, uncomment the multiline section:

#multiline:
#  match: "nonfirst"
#  pattern: "^\\s"
#  separator: "\n"
#  flush_after: "500ms"

Configuration

SolarWinds recommends creating a new task to enable the collection of multiline logs. Configuration options allow you to customize log content.

  • match: Set which lines the regex pattern should match. The default setting is nonfirst. Valid values include:
    • first – match the first line only and append the following lines until another line matches.
    • last – concatenate all lines until the pattern matches the next line.
    • nonlast – match a line and append subsequent matching lines; append the first non-matching line, then start matching from the beginning.
    • nonfirst – append all matching lines to the first line and begin again with the next non-matching line.
  • pattern: Set a regex pattern for multiline logging, compatible with Go regular expression syntax. The default setting is "^\\s", which matches lines that begin with whitespace.
  • separator: Set a separator between lines in the output. The default setting is "\n". Change it to "\\n" for the following publishers:
    • loggly-http-bulk
    • loggly-syslog
    • papertrail-syslog
    • swi-logs-http-bulk
  • flush_after: Set the maximum time between the first and last lines of a multiline log entry. The default setting is "500ms". A complete example is shown below.
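For example, a complete multiline section for collecting Java stack traces, based on the sample patterns shipped in the task files (note that the sample files use a patterns list rather than the single pattern option shown above), might look like this:

multiline:
  match: "nonlast"
  patterns:
    - "^(java.lang|\\s)"
  separator: "\\n"
  flush_after: "500ms"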

Configuring a Logs Collector

Files

The Files collector allows you to monitor plain text files for new log entries. It is useful for monitoring application logs and standard system logs (ex: /var/log/syslog on Linux). Several Snap Agent integration task files contain predefined logs sections that can be uncommented to enable Files collector functionality for the integration.

This collector supports multiline logs. See Multiline for more information.

Configuration

The sample task file for the Files collector is located in /opt/SolarWinds/Snap/etc/tasks-autoload.d/task-logs-files.yaml.example on Linux, and C:\ProgramData\SolarWinds\Snap\tasks-autoload.d\task-logs-files.yaml.example on Windows. Configuration options are explained in comments in the sample file.

---
version: 2

schedule:
  type: streaming

plugins:
  - plugin_name: log-files

    config:
      ## An interval for looking for new files matching given pattern(s)
      #new_file_check_interval: 30s

      ## An array of files or filename patterns to watch.
      ##
      ## NOTE: Be careful when attempting to handle snapteld logs,
      ## as those might also contain log entries from the logs collector.
      ## To avoid an infinite recursion effect, apply the exclude pattern below by adding:
      ## ".*self-skip-logs-collector.*"
      file_paths:
        - /var/log/syslog
        - /var/log/messages
      #  - /var/log/*.log
      #  - /var/log/httpd/access_log

      ## Provide one or more regular expressions to prevent certain files from being matched.
      #exclude_files_patterns:
      #  - \.\d$
      #  - \.bz2
      #  - \.gz

      ## There may be certain log messages that you do not want to be sent.
      ## These may be repetitive log lines that are "noise" that you might
      ## not be able to filter out easily from the respective application.
      ## To filter these lines, use exclude_patterns with an array or regexes.
      #exclude_lines_patterns:
      #  - exclude this
      #  - \d+ things

      ## Enable collecting multiline logs from files
      #multiline:

      ## Set which lines the pattern should match, one of first|last|nonfirst|nonlast.
      ## Defaults to: "nonfirst"
      #  match: "nonfirst"

      ## Set a list of regex patterns for multiline logging. Defaults to list with one regex pattern: "^\\s"
      #  patterns:
      #   - "^\\s"

      ## Example for collecting C stack trace:
      #  match: "nonfirst"
      #  patterns:
      #   - "^(./|/)"

      ## Example for collecting C stack trace with core dump:
      #  match: "nonfirst"
      #  patterns:
      #   - "^(Program|\[.|#\\d)"

      ## Example for collecting C++ stack trace:
      #  match: "nonlast"
      #  patterns:
      #   - "^(Stack trace|#\\d)"

      ## Example for collecting Golang stack trace:
      #  match: "nonfirst"
      #  patterns:
      #   - "^(goroutine|main.|runtime|\\s|$)"

      ## Example for collecting Python stack trace:
      #  match: "nonlast"
      #  patterns:
      #   - "^(Traceback|\\s)"

      ## Example for collecting Java stack trace:
      #  match: "nonlast"
      #  patterns:
      #   - "^(java.lang|\\s)"

      ## Set a separator between lines for output. Defaults to: "\n"
      #  separator: "\n"
      ## For publishers like loggly-http-bulk, loggly-syslog, papertrail-syslog, and swi-logs-http-bulk,
      ## use a different separator:
      #  separator: "\\n"

      ## Set maximum time between the first and last lines of a multiline log entry.
      ## Defaults to: "500ms"
      #  flush_after: "500ms"

    #metrics:
    #  - |log-files|[file]|string_line

    #tags:
    #  "|log-files|[file=/tmp/application.log]|string_line":
    #    sometag: somevalue

    publish:
      - plugin_name: loggly-http-bulk

Default tags

Every collected log line is by default tagged with:

  • source – name of the file

Windows events

The Windows events collector is available only for Windows installations and allows you to monitor Windows events on your host, enabling auditing of the host or of the applications running on it. You can configure the collector to filter events so that only the events you are interested in are collected.

Configuration

The sample task file for the Windows events collector is located in C:\ProgramData\SolarWinds\Snap\tasks-autoload.d\task-logs-win-events.yaml.example. Configuration options are explained in comments in the sample file.

--- 
version: 2 
 
schedule: 
  type: streaming 
 
plugins: 
  - plugin_name: win-events 
 
    config: 
 
      ## Filters enumerate the channels to be observed.
      ## Each channel can provide independent filters describing which messages should be collected.
      ## Currently the following fields are supported:
      ##  - Level, level of the event (e.g., Error, Warning, Information, Success Audit, Failure Audit)
      ##  - EventId, event identifier
      ##  - Source, application that triggered the event (e.g., VSS, Winlogon)
      ##  - Computer, computer on which the event was triggered
      ##  - User, user who triggered the event
      ##  - Message, message associated with the event
      ## For each field, either a single value or a list of possible values can be provided.
      ## Field names and value(s) are case sensitive.
      ##
      ## There are also special matchers:
      ##  - range, allows you to specify a range of numbers instead of listing them one by one
      ##  - contains, checks whether a string contains a specific word
      ##  - matches, checks whether a string matches a regular expression
      ## range should be used only with the EventId field; contains and matches are most useful with the Message field,
      ## although they may be used as matchers with any other field (requiring a string argument).
      filters: 
      - channel: Application 
        level: Error 
      - channel: System 
        level: Error 
      - channel: Security 
        level: Error 
      #- channel: Application 
      #  level: 
      #  - Error 
      #  - Warning 
      #  event_id: 
      #  - 50 
      #  - range: 
      #      min: 55 
      #      max: 60 
      #  source: 
      #  - AppOptics 
      #  - Snapteld 
      #  computer: host.domain 
      #  user: windows-user 
      #  message: 
      #  - matches: event[0-9]{2,3} 
      #  - contains: message 
 
    #metrics: 
    #  - /win-events/[channel]/[level]/[source]/string_line 
 
    #tags: 
    #  /win-events/[channel=Application]/[level=Error]/[source=AppOptics]: 
    #    sometag: somevalue 
 
    publish: 
      - plugin_name: loggly-http-bulk 

Default tags

Every collected log line is by default tagged with:

  • source – "win-events"

Docker logs

The Docker logs collector monitors logs produced by your containers. It can be used both in standalone Docker setups and in a Kubernetes cluster to monitor specific containers. In Kubernetes, the agent can be deployed in a DaemonSet or as a sidecar container. You can configure the Docker collector to filter containers in a manner similar to the docker logs command, allowing you to collect logs only from the containers you are interested in.

This collector supports multiline logs. See Multiline for more information.

Configuration

The sample task file for the Docker logs collector is located in /opt/SolarWinds/Snap/etc/tasks-autoload.d/task-logs-docker.yaml.example on Linux, and C:\ProgramData\SolarWinds\Snap\tasks-autoload.d\task-logs-docker.yaml.example on Windows. Configuration options are explained in comments in the sample file.

---
version: 2

schedule:
  type: streaming

plugins:
  - plugin_name: docker-logs

    config:

      ## Set a docker service endpoint. Defaults to: unix:///var/run/docker.sock
      endpoint: unix:///var/run/docker.sock

      ## Set maximum time for calling docker api. Defaults to: 15s
      timeout: 15s

      ## Set the time after which the log collection engine is restarted when the Docker service is not available.
      ## Defaults to: 60s
      retry_interval: 60s

      ## Enable collecting multiline logs from containers
      #multiline:

      ## Set which lines the pattern should match, one of first|last|nonfirst|nonlast.
      ## Defaults to: "nonfirst"
      #  match: "nonfirst"

      ## Set a list of regex patterns for multiline logging. Defaults to list with one regex pattern: "^\\s"
      #  patterns:
      #   - "^\\s"

      ## Example for collecting C stack trace:
      #  match: "nonfirst"
      #  patterns:
      #   - "^(./|/)"

      ## Example for collecting C stack trace with core dump:
      #  match: "nonfirst"
      #  patterns:
      #   - "^(Program|\[.|#\\d)"

      ## Example for collecting C++ stack trace:
      #  match: "nonlast"
      #  patterns:
      #   - "^(Stack trace|#\\d)"

      ## Example for collecting Golang stack trace:
      #  match: "nonfirst"
      #  patterns:
      #   - "^(goroutine|main.|runtime|\\s|$)"

      ## Example for collecting Python stack trace:
      #  match: "nonlast"
      #  patterns:
      #   - "^(Traceback|\\s)"

      ## Example for collecting Java stack trace:
      #  match: "nonlast"
      #  patterns:
      #   - "^(java.lang|\\s)"

      ## Set a separator between lines for output. Defaults to: "\n"
      #  separator: "\n"
      ## For publishers like loggly-http-bulk, loggly-syslog, papertrail-syslog, and swi-logs-http-bulk,
      ## use a different separator:
      #  separator: "\\n"

      ## Set maximum time between the first and last lines of a multiline log entry.
      ## Defaults to: "500ms"
      #  flush_after: "500ms"

      ## Set the list of filters.
      logs:

      ## Set filters for getting interesting containers.
      ## More can be found here: https://docs.docker.com/engine/api/v1.39/#operation/ContainerList
      - filters:
          name:
            nginx: true

        ## Set options for container logs.
        ## More can be found here: https://docs.docker.com/engine/api/v1.39/#operation/ContainerLogs
        options:
          showstdout: true
          showstderr: true
          since: ''
          follow: true
          tail: all

        ## Allow reading logs from a container started with the -t option.
        allow_tty: false

        ## Ignore a container by setting a container label.
        ## Alternatively, ignore a container by setting an environment variable on it.
        ## Example: -e "LOGS=ignore" or -e "LOGS=IGNORE"
        exclude_variables:
        - not-log

    #metrics:
    #  - /docker-logs/[container]/string_line

    #tags:
    #  /docker-logs/[container=my_container]/string_line:
    #    sometag: somevalue

    publish:
      - plugin_name: loggly-http-bulk

Default tags

Every collected log line is by default tagged with:

  • source – "docker-logs"
  • cID – container ID
  • cName – container name

Kubernetes events

The Kubernetes collector monitors events in a Kubernetes cluster. You can configure the collector to filter by namespaces or field selectors, similar to the kubectl get events command. You can run this collector in your Kubernetes cluster as a deployment or as a standalone agent.

Large Kubernetes clusters can produce a high volume of events, so SolarWinds recommends tuning your selections. If you need to scale the collection, add more deployments with proper filtering options.
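For example, to spread the load, one agent deployment could collect only Warning events from the kube-system namespace while another handles the default namespace. A sketch using the filter fields from the sample file below:

filters:
- namespace: kube-system
  watch_only: true
  options:
    fieldSelector: "type==Warning"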

Configuration

The sample task file for the Kubernetes collector is located in /opt/SolarWinds/Snap/etc/tasks-autoload.d/task-logs-k8s-events.yaml.example on Linux, and C:\ProgramData\SolarWinds\Snap\tasks-autoload.d\task-logs-k8s-events.yaml.example on Windows.

To retrieve logs from the swisnap agent pod, use one of the following commands:

kubectl logs -n kube-system swisnap-agent-<name>
docker logs <docker-container-name>

Configuration options are explained in comments in the sample file.

--- 
version: 2 
 
schedule: 
  type: streaming 
 
plugins: 
  - plugin_name: k8s-events 
 
    config: 
      ## Configure k8s events collection:
      ## - "incluster" (defaults to false) defines whether the collector is running inside a container in the cluster,
      ##   or next to the cluster (in which case kubeconfigpath must be provided).
      ## - "kubeconfigpath" (defaults to "~/.kube/config" on unix, "%USERPROFILE%/.kube/config" on windows) defines the path to the k8s configuration if the collector is running next to the cluster.
      ## - "filters" defines the filters used to select interesting events.
      ##    "watch_only" (defaults to true) defines whether the collector should watch only new events, without listing/getting stored ones.
      ##    Namespace "default" and type "Normal" are the default values for the filter.
      ##    Keep in mind that collecting all events from a kubernetes cluster can generate significant load, so tune your selections.
      ##    More about filter fields can be found here: https://documentation.solarwinds.com/en/Success_Center/appoptics/content/kb/host_infrastructure/integrations/kubernetes.htm#configuration
 
      #incluster: false 
      #kubeconfigpath: "~/.kube/config" 
 
      #filters: 
      #- namespace: default 
      #  watch_only: true 
      #  options: 
      #    fieldSelector: "type==Normal" 
      #- namespace: kube-system 
      #  watch_only: true 
      #  options: 
      #    fieldSelector: "type==Warning" 
 
    #tags: 
    #  /k8s-events/[namespace=my_namespace]/string_line: 
    #    sometag: somevalue 
 
    publish: 
      - plugin_name: loggly-http-bulk 

Default tags

Every collected log line is by default tagged with:

  • source – "k8s-events"

Syslog server

The syslog server collector starts a syslog server on your host to collect syslog messages sent by devices in your infrastructure, e.g., firewalls. It supports various communication and filtering options. If you would like to start two syslog servers using the TCP and UDP protocols, or different ports, simply create another task file. By default, the syslog server is started on the non-privileged UDP port 4514. If you want to run the server on a standard low port, you need to enable the appropriate capabilities for swisnapd on Linux; with new installations, the installer gives you the option to allow the swisnapd binary to bind to privileged ports, as in the sketch below.
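On Linux, one common way to allow a non-root daemon to bind low ports is to grant the CAP_NET_BIND_SERVICE file capability to the binary. A hedged sketch (the swisnapd binary path below is an assumption and may differ on your installation):

    # Grant the capability to bind privileged ports; adjust the path to your swisnapd binary
    sudo setcap 'cap_net_bind_service=+ep' /opt/SolarWinds/Snap/sbin/swisnapd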

Configuration

The sample task file for the syslog server collector is located in /opt/SolarWinds/Snap/etc/tasks-autoload.d/task-logs-syslog.yaml.example on Linux, and C:\ProgramData\SolarWinds\Snap\tasks-autoload.d\task-logs-syslog.yaml.example on Windows. Configuration options are explained in comments in the sample file.

--- 
version: 2 
 
schedule: 
  type: streaming 
 
plugins: 
  - plugin_name: syslog 
 
    config: 
      server: 
        ## The protocol used by the syslog server
        ## One of: udp (default), tcp, unix 
        protocol: "udp" 
 
        ## Address to listen on 
        host: "127.0.0.1" 
 
        ## Port to listen on (defaults to 4514) 
        ## NOTE: For listening on ports lower than 1024 you might need to set up proper permissions 
        #port: 4514 
 
        ## Connection timeout (defaults to 0) 
        #timeout: 0 
 
        ## Message buffer size sets the internal queue lengths for received messages (defaults to 1024) 
        #message_buffer_size: 1024 
 
        ## UDP socket read buffer size (defaults to 10MB) 
        #udp_read_buffer_size: 10485760 
 
      ## Optional TLS settings to be used with TCP protocol 
      #tls: 
        #cert_path: "/path/to/server.pem" 
        #key_path: "/path/to/server.key" 
        #ca_path: "/path/to/root/ca.pem" 
        #insecure: false 
 
      ## Additional syslog server features 
      #syslog: 
        ## Format of messages to use for parsing 
        ## One of: RFC3164, RFC5424 (default), RFC6587, auto 
        #syslog_format: RFC5424 
         
        ## Set to numerical equivalent if you wish to filter messages by priority 
        ## 0 (default) means no filtering 
        #min_priority: 0 
 
        ## Set to numerical equivalent if you wish to filter messages by severity 
        ## 0 (default) means no filtering 
        #min_severity: 0 
 
    #metrics: 
    #  - /syslog/[ip_address]/[hostname]/string_line 
 
    #tags: 
    #  /syslog/[ip_address=127.0.0.1]/[hostname=server]/string_line: 
    #    sometag: somevalue 
 
    publish: 
      - plugin_name: loggly-http-bulk

Default tags

Every collected log line is by default tagged with:

  • source – "syslog"
  • ip_address – IP address from which the syslog message was received
  • hostname – hostname from the syslog message

Publishers

There are several publishers available in the agent so you can send logs to Loggly and Papertrail in a way that best fits your needs. Each collector is compatible with all publishing methods. The default publisher used in the sample task files is loggly-http-bulk.

All publisher configurations are combined in a single configuration file: /opt/SolarWinds/Snap/etc/plugins.d/publisher-logs.yaml on Linux and C:\ProgramData\SolarWinds\Snap\plugins.d\publisher-logs.yaml on Windows. During the agent installation, the configuration file is populated with the token you provide. You can change the token if you need the publisher to use a different Loggly account.

This is the main template for the publisher configuration; it can be overridden in tasks if needed. The agent uses the loggly-http-bulk publisher by default.

    ...     
    #metrics: 
    #  - /syslog/[ip_address]/[hostname]/string_line 
 
    #tags: 
    #  /syslog/[ip_address=127.0.0.1]/[hostname=server]/string_line: 
    #    sometag: somevalue 
 
    publish: 
      - plugin_name: loggly-http-bulk 
        config: 
          token: <OTHER TOKEN> 
          bulk_size: 500 

The common -bulk suffix in the publisher names indicates that the publisher operates on batches of consecutive messages instead of sending each message as soon as it is received. In most cases, the bulk version is more efficient.

Loggly HTTP

The Loggly HTTP publisher sends logs to Loggly using its HTTPS event endpoint. This publisher supports disk caching for log messages if they cannot be sent out to the service.

v2: 
  publisher: 
    loggly-http: 
      all: 
        ## Loggly logs endpoint (defaults to https://logs-01.loggly.com/inputs/) 
        #endpoint: https://logs-01.loggly.com/inputs/ 
 
        ## Timeout for sending data to the endpoint when the data buffer is not full (defaults to 10s) 
        #timeout: "10s" 
 
        ## Token for authorization with logs ingestion endpoint 
        ## Should match the API token generated in Loggly application settings for your account 
        token: SOLARWINDS_TOKEN 
 
        ## Content type specifies whether data should be sent as JSON or text. Default is text.
        #content_type: "text"

        ## Control cache behavior. When commented out, the cache is disabled.
        #cache:
        # <cache options here> 

Loggly HTTP bulk (default)

The Loggly HTTP bulk publisher sends logs to Loggly using its HTTPS bulk event endpoint. This publisher supports disk caching for log messages if they cannot be sent out to the service.

v2: 
  publisher: 
    loggly-http-bulk: 
      all: 
        ## Loggly bulk logs endpoint (defaults to https://logs-01.loggly.com/bulk/) 
        #endpoint: https://logs-01.loggly.com/bulk/ 
 
        ## Timeout for sending data to the endpoint when the data buffer is not full (defaults to 10s) 
        #timeout: "10s" 
 
        ## Token for authorization with logs ingestion endpoint 
        ## Should match the API token generated in Loggly application settings for your account 
        token: SOLARWINDS_TOKEN 
 
        ## Max number of messages in bulk. By default 1000. 
        #bulk_size: 1000 
 
        ## Max time publisher will wait before sending next bulk. By default 10s. 
        #max_bulk_time: "10s" 
 
        ## Control cache behavior. When commented out, the cache is disabled.
        #cache: 
        # <cache options here> 

Loggly syslog

The Loggly syslog publisher sends logs to Loggly using its Syslog Endpoint.

v2: 
  publisher: 
    loggly-syslog: 
      all: 
        ## Loggly API token and host 
        token: SOLARWINDS_TOKEN 
        host: "logs-01.loggly.com" 
 
        ## Loggly API port and protocol 
        ## use 6514 with TLS or 514 with TCP 
        port: 6514 
        protocol: tls 
 
        ## Override the hostname used for logs reported by this agent. Defaults to the OS-provided hostname. 
        #hostname: "myhost" 
 
        ## Path to Loggly public CA certificate. See https://www.loggly.com/docs/rsyslog-tls-configuration/ for reference. 
        ## Uncomment this line if you want to use custom host certificate store. 
        #ca_certificate_path: /path/to/your/certificate

Papertrail HTTP

The Papertrail HTTP publisher sends logs to Papertrail using its HTTPS event endpoint. This publisher supports disk caching for log messages if they cannot be sent out to the service.

v2: 
  publisher: 
    swi-logs-http: 
      all: 
        ## SWI logs endpoint (defaults to https://logs.collector.solarwinds.com/v1/logs) 
        #endpoint: https://logs.collector.solarwinds.com/v1/logs 
 
        ## Timeout for sending data to the endpoint when the data buffer is not full (defaults to 10s) 
        #timeout: "10s" 
 
        ## Token for authorization with logs ingestion endpoint 
        ## Should match the API token generated in Loggly or Papertrail application settings for your account 
        token: SOLARWINDS_TOKEN 
 
        ## Whether tags should be added to the message. Default is false.
        #prefix_with_tags: false

        ## Content type specifies whether data should be sent as JSON or text. Default is text.
        ## If content_type is json and prefix_with_tags is set, each message will be wrapped in the following JSON:
        ## {"timestamp": "2020-11-20T09:10:00Z", "tags": { "tag1": "value1", "tag2": "value2" }, "message": <your json message> }
        ## The message is not checked for being valid JSON.
        #content_type: "text"
 
        ## Control cache behavior. When commented out, the cache is disabled.
        #cache: 
        # <cache options here>

Papertrail HTTP bulk

The Papertrail HTTP bulk publisher sends logs to Papertrail using its HTTPS bulk event endpoint. This publisher supports disk caching for log messages if they cannot be sent out to the service.

v2: 
  publisher: 
    swi-logs-http-bulk: 
      all: 
        ## SWI bulk logs endpoint (defaults to https://logs.collector.solarwinds.com/v1/logs) 
        #endpoint: https://logs.collector.solarwinds.com/v1/logs 
 
        ## Timeout for sending data to the endpoint when the data buffer is not full (defaults to 10s) 
        #timeout: "10s" 
 
        ## Token for authorization with logs ingestion endpoint 
        ## Should match the API token generated in Loggly or Papertrail application settings for your account 
        token: SOLARWINDS_TOKEN 
 
        ## Whether tags should be added to the message. Default is false.
        #prefix_with_tags: false 
 
        ## Max number of messages in bulk. Default is 1000. 
        #bulk_size: 1000 
 
        ## Max time publisher will wait before sending next bulk. Default is 10s. 
        #max_bulk_time: "10s" 
 
        ## Control cache behavior. When commented out, the cache is disabled.
        #cache: 
        # <cache options here>

Papertrail syslog

The Papertrail syslog publisher sends logs to Papertrail using its Syslog Endpoints.

v2: 
  publisher: 
    papertrail-syslog: 
      all: 
        ## Sign up for a Papertrail account at: https://papertrailapp.com 
        ## Papertrail host and port details: change this to YOUR papertrail host. 
        host: "HOST.papertrailapp.com" 
        port: 12345 
        protocol: tls 
         
        ## Override the hostname used for logs reported by this agent. Defaults to the OS-provided hostname. 
        # hostname: "myhost" 
 
        ## Whether tags should be added to the message. Default is false.
        #prefix_with_tags: false 

Local caching of logs

HTTP-based publishers can cache logs locally if they cannot be sent out to the service, for example during a network outage. The cache persists across agent restarts. It can be configured in many ways, including modifying the size and cache strategy.

        ## Control cache behavior. When commented out, the cache is disabled.
        #cache: 
          ## Directory where cache files will be stored 
          #dir_path: "/opt/SolarWinds/Snap/cache" 
 
          ## Name of cache within directory 
          #name: "swi-logs-http-bulk" 
 
          ## Maximum size of a single entry that might be stored in the cache, in bytes. By default: 100kB
          #max_entry_size: 102400  
 
          ## Maximum size of a single cache file in bytes (the entire cache is stored in separate files for optimization)
          ## By default: 1MB 
          #max_bytes_per_file: 1048576 
 
          ## Determines how often the cache will be synced (written to disk).
          ## You can specify syncing after writing n new elements and/or after a specific time period
          #sync_after_n_writes: 5 
          #sync_interval: "5s" 
 
          ## Timeout before sending the next data from the cache if a send error occurred. By default: 5s.
          #resend_interval: "5s" 
 
          ## If the total size of the cache is exceeded, there is no room for new elements in the cache.
          ## If true, the oldest elements are removed from the cache so that new elements can be stored.
          ## If false, new elements won't be added to the cache.
          #remove_older_entries_when_total_size_exceeded: false
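For example, a minimal uncommented cache section that enables disk caching and evicts the oldest entries when the cache is full might look like this (the values are illustrative, and the directory must be writable by the agent):

        cache:
          dir_path: "/opt/SolarWinds/Snap/cache"
          name: "swi-logs-http-bulk"
          remove_older_entries_when_total_size_exceeded: true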

Troubleshooting

Too many open files

If the collector stops forwarding lines and you notice errors like the following in the SolarWinds Snap Agent logs (typically located in /var/log/SolarWinds/Snap/swisnapd.log):

ERRO[2019-04-18T13:35:30-04:00] time="2019-04-18T13:35:30-04:00" level=error msg="follower error" error="too many open files" self-skip-logs-collector= submodule=remote_syslog _module=plugin-exec io=stderr plugin=logs

it means your OS limits must be raised to handle monitoring of all the requested files. You can determine the maximum number of inotify (the underlying file watcher) instances that can be created using:

cat /proc/sys/fs/inotify/max_user_instances

and then increase this limit using:

echo VALUE >> /proc/sys/fs/inotify/max_user_instances

where VALUE is greater than the present setting.

Once you confirm that the new limit is sufficient and the logs collector/forwarder works as expected, apply the new value permanently by adding the following to /etc/sysctl.conf:

fs.inotify.max_user_instances = VALUE
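
After editing /etc/sysctl.conf, reload the settings so the new limit takes effect without a reboot:

sudo sysctl -p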

Navigation Notice: When the APM Integrated Experience is enabled, AppOptics shares a common navigation and enhanced feature set with other integrated experience products. How you navigate AppOptics and access its features may vary from these instructions.

The scripts are not supported under any SolarWinds support program or service. The scripts are provided AS IS without warranty of any kind. SolarWinds further disclaims all warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The risk arising out of the use or performance of the scripts and documentation stays with you. In no event shall SolarWinds or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the scripts or documentation.