Documentation for AppOptics

Monitoring Kubernetes with AppOptics

AppOptics provides monitoring for the Kubernetes cluster itself, as well as for the containers and processes running inside it. Monitoring applications running in Kubernetes is done via our APM agents. To learn more about the Kubernetes plugin, see its documentation.


There are three basic ways to monitor a Kubernetes cluster with AppOptics: "bare-metal" on the Kubernetes master, as a pod, and as a sidecar.

"Bare-metal" setup

In this setup, the SolarWinds Snap Agent is running on the Kubernetes master.

  • First, the solarwinds user must have access to the kubeconfig.

    • By default, the Kubernetes config is stored in /etc/kubernetes/admin.conf
    • Copy the Kubernetes configuration file to the destination specified in the Kubernetes plugin config, and change its owner to the solarwinds user:

      $ sudo mkdir -p /opt/SolarWinds/Snap/etc/.kube
      $ sudo cp /etc/kubernetes/admin.conf /opt/SolarWinds/Snap/etc/.kube/config
      $ sudo chown -R solarwinds:solarwinds /opt/SolarWinds/Snap/etc/.kube
    • Example plugin config:

        incluster: false
        kubeconfigpath: "/opt/SolarWinds/Snap/etc/.kube/config"
        interval: 60s
        plugin: snap-plugin-collector-aokubernetes
        task: task-aokubernetes.yaml
    • Note: the plugin config must include the incluster: false setting, which denotes that the SolarWinds Snap Agent is not running as a pod in a Kubernetes cluster
  • You can easily interact with the SolarWinds Snap Agent via the swisnap CLI. For additional information on commands, see the swisnap CLI documentation.

    $ swisnap task list

Pod setup

In this setup, the SolarWinds Snap Agent is running in a Kubernetes pod.

  • Clone solarwinds-snap-agent-docker repository:

    $ git clone <solarwinds-snap-agent-docker repository URL>
  • Kubernetes assets available in the solarwinds-snap-agent-docker repository expect a solarwinds-token secret to exist. To create this secret, run:

    $ kubectl create secret generic solarwinds-token -n kube-system --from-literal=SOLARWINDS_TOKEN=<REPLACE WITH TOKEN>


  • By default, RBAC is enabled in the deploy manifests. If you are not using RBAC, you can deploy swisnap-agent-deployment.yaml after removing the reference to the Service Account.
  • In the configMapGenerator section of kustomization.yaml, you can configure which plugins should run by setting SWISNAP_ENABLE_<plugin_name> to either true or false. Plugins enabled via environment variables use the default configuration and taskfiles. For the list of plugins currently supported this way, refer to Environment parameters.
  • After configuring the deployment to your needs (refer to Integrating AppOptics Plugins with Kubernetes) and ensuring that the solarwinds-token secret was already created, run:

    $ kubectl create -f swisnap-agent-deployment.yaml
  • Finally, check if the deployment is running properly:

    $ kubectl get deployment swisnap-agent-k8s -n kube-system
  • Enable the Kubernetes plugin in the AppOptics UI and you should start seeing data trickle in.


  • The DaemonSet, by default, gives you insight into containers running on its nodes and gathers system, process, and Docker-related metrics. To deploy the DaemonSet to Kubernetes, verify you have a solarwinds-token secret already created and run:

    $ kubectl apply -k ./deploy/overlays/stable/daemonset
  • Enable the Docker plugin in the AppOptics UI and you should start seeing data trickle in.
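The SWISNAP_ENABLE_<plugin_name> toggles described above live in the configMapGenerator section of kustomization.yaml. As a rough sketch (the exact literals are assumptions for illustration; the generator name swisnap-k8s-configmap is taken from the configMapRef used later in this document), the section might look like:

```yaml
# kustomization.yaml fragment: plugin toggles injected into the agent's environment.
# Literal values shown here are illustrative; set them per your monitoring needs.
configMapGenerator:
  - name: swisnap-k8s-configmap
    literals:
      - SWISNAP_ENABLE_KUBERNETES=true
      - SWISNAP_ENABLE_DOCKER=false
```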

Sidecar setup

If you want to run the agent on Kubernetes as a sidecar for monitoring specific services, follow the instructions below, which use Apache Server as an example. In this setup, the agent monitors only the services running in particular pod(s), not Kubernetes itself.

  • Useful when you want to monitor only specific per-pod services
  • Configuration is similar to the pod setup
  • To monitor only specific services, disable the kubernetes and aosystem plugins by setting SWISNAP_ENABLE_KUBERNETES to false and SWISNAP_DISABLE_HOSTAGENT to true in swisnap-agent-deployment.yaml

Example: Running Apache Server with sidecar SolarWinds Snap Agent

Containers inside the same pod can communicate through localhost, so there's no need to pass a static IP - Resource sharing and communication

In order to monitor Apache with the agent in a sidecar, add a second container to your deployment YAML underneath spec.template.spec.containers and the agent should now have access to your service over localhost (notice SWISNAP_ENABLE_APACHE):

- name: apache
  imagePullPolicy: Always
  image: '<your-image>'
  ports:
    - containerPort: 80
- name: swisnap-agent-ds
  image: 'solarwinds/solarwinds-snap-agent-docker:1.0.0'
  imagePullPolicy: Always
  env:
    - name: SOLARWINDS_TOKEN
      value: 'SOLARWINDS_TOKEN'
    # env var name below is an assumption; it exposes the node name via the downward API
    - name: APPOPTICS_HOSTNAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: SWISNAP_ENABLE_KUBERNETES
      value: 'false'
    - name: SWISNAP_DISABLE_HOSTAGENT
      value: 'true'
    - name: SWISNAP_ENABLE_APACHE
      value: 'true'
    - name: HOST_PROC
      value: '/host/proc'

In the example above, the sidecar runs only the Apache plugin. Additionally, if the default Apache plugin configuration is not sufficient, a custom one should be passed to the pod running the SolarWinds Snap Agent - see Integrating AppOptics Plugins with Kubernetes.
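For illustration, such a custom config could be mounted over the default plugins.d/apache.yaml via a configMap, as described in the next section. The keys below are assumptions (modeled on the v2 collector layout referenced elsewhere in this document, with a hypothetical mod_status_url setting); consult the Apache plugin documentation for the real schema:

```yaml
# Hypothetical plugins.d/apache.yaml override for a sidecar setup.
# The Apache container is reachable over localhost within the same pod,
# so no static IP is needed. Key names here are assumptions.
collector:
  apache:
    all:
      mod_status_url: "http://localhost:80/server-status?auto"
```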

Integrating AppOptics Plugins with Kubernetes

For the "bare-metal" setup, follow the steps described in Integrations. When the SolarWinds Snap Agent is running inside a Kubernetes pod, integrating AppOptics plugins with Kubernetes requires some additional steps. The SolarWinds Snap Agent image uses default plugin configuration files and task manifests. To use your own configuration, you have to create a Kubernetes configMap. In this example we'll set up two configMaps: one for the SolarWinds Snap Agent Kubernetes plugin config, and a second for the corresponding task.

# create plugin configMap
kubectl create configmap kubernetes-plugin-config --from-file=/path/to/my/plugins.d/kubernetes.yaml --namespace=kube-system

# create task configMap
kubectl create configmap kubernetes-task-manifest --from-file=/path/to/my/tasks.d/task-aokubernetes.yaml --namespace=kube-system

# check if everything is fine
kubectl describe configmaps --namespace=kube-system kubernetes-task-manifest kubernetes-plugin-config

The configMaps should be attached to the SolarWinds Snap Agent deployment. Here's an example; notice spec.template.spec.containers.volumeMounts and spec.template.spec.volumes:

diff --git a/deploy/base/deployment/kustomization.yaml b/deploy/base/deployment/kustomization.yaml
index 79e0110..000a108 100644
--- a/deploy/base/deployment/kustomization.yaml
+++ b/deploy/base/deployment/kustomization.yaml
@@ -15,7 +15,7 @@ configMapGenerator:
diff --git a/deploy/base/deployment/swisnap-agent-deployment.yaml b/deploy/base/deployment/swisnap-agent-deployment.yaml
index 294c4b4..babff7d 100644
--- a/deploy/base/deployment/swisnap-agent-deployment.yaml
+++ b/deploy/base/deployment/swisnap-agent-deployment.yaml
@@ -45,6 +45,12 @@ spec:
             - configMapRef:
                 name: swisnap-k8s-configmap
+            - name: kubernetes-plugin-vol
+              mountPath: /opt/SolarWinds/Snap/etc/plugins.d/kubernetes.yaml
+              subPath: kubernetes.yaml
+            - name: kubernetes-task-vol
+              mountPath: /opt/SolarWinds/Snap/etc/tasks.d/task-aokubernetes.yaml
+              subPath: task-aokubernetes.yaml
             - name: proc
               mountPath: /host/proc
               readOnly: true
@@ -56,6 +62,18 @@ spec:
               cpu: 100m
               memory: 256Mi
+        - name: kubernetes-plugin-vol
+          configMap:
+            name: kubernetes-plugin-config
+            items:
+              - key: kubernetes.yaml
+                path: kubernetes.yaml
+        - name: kubernetes-task-vol
+          configMap:
+            name: kubernetes-task-manifest
+            items:
+              - key: task-aokubernetes.yaml
+                path: task-aokubernetes.yaml
         - name: proc
             path: /proc
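Applied, the diff corresponds to roughly the following fragments of swisnap-agent-deployment.yaml (surrounding fields are omitted, and the agent container's name is not shown in the diff, so it is elided here):

```yaml
# volumeMounts on the agent container
volumeMounts:
  - name: kubernetes-plugin-vol
    mountPath: /opt/SolarWinds/Snap/etc/plugins.d/kubernetes.yaml
    subPath: kubernetes.yaml
  - name: kubernetes-task-vol
    mountPath: /opt/SolarWinds/Snap/etc/tasks.d/task-aokubernetes.yaml
    subPath: task-aokubernetes.yaml
# pod-level volumes backed by the configMaps created earlier
volumes:
  - name: kubernetes-plugin-vol
    configMap:
      name: kubernetes-plugin-config
      items:
        - key: kubernetes.yaml
          path: kubernetes.yaml
  - name: kubernetes-task-vol
    configMap:
      name: kubernetes-task-manifest
      items:
        - key: task-aokubernetes.yaml
          path: task-aokubernetes.yaml
```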

Notice that we're not utilizing Environment parameters to turn on the Kubernetes plugin. When you're attaching taskfiles and plugin configuration files through configMaps, there's no need to set the SWISNAP_ENABLE_<plugin-name> environment variables. The SolarWinds Snap Agent will automatically load plugins based on the files stored in the configMaps.

Integrating Kubernetes Cluster Events Collection With Loggly

Version 2.2 of the Kubernetes collector allows you to collect cluster events and push them to Loggly using the logs collector. You need to create the corresponding configMaps in your cluster with the proper plugin configuration. Example config files can be found in Event collector configs. To enable event collection in your deployment, follow the steps below:

  1. Create a Kubernetes secret for SOLARWINDS_TOKEN:

    kubectl create secret generic solarwinds-token -n kube-system --from-literal=SOLARWINDS_TOKEN=SOLARWINDS_TOKEN


    • SOLARWINDS_TOKEN: your SolarWinds AppOptics token.
  2. Modify the logs-v2.yaml file to include your Loggly token and subdomain in the v2:collector:logs:all:logging_service:loggly:token and host fields.

            #  [...]
              # [..]
              ## Sign up for a Loggly account at:
                ## Loggly API token and host
                token: "LOGGLY_TOKEN"
                host: ""
                ## Loggly API port and protocol
                ## use 6514 with TLS or 514 with TCP
                port: 6514
                protocol: tls


  3. The task-logs-k8s.yaml file configures the logs collector plugin. It tells the logs collector to watch the /var/log/SolarWinds/Snap/events.log file. You can leave this file unmodified.

    version: 2
    # note: the schedule/plugins/publish nesting below is assumed from the v2 task
    # schema; "# [...]" marks config keys elided in the original document
    schedule:
      type: cron
      interval: "0 * * * * *"
    plugins:
      - plugin_name: logs
        metrics:
          - /logs/lines_total
          - /logs/lines_forwarded
          - /logs/bytes_forwarded
          - /logs/lines_skipped
          - /logs/lines_failed
          - /logs/bytes_failed
          - /logs/lines_succeeded
          - /logs/bytes_succeeded
          - /logs/attempts_total
          - /logs/failed_attempts_total
        config:
          # [...]
            - Path: /var/log/SolarWinds/Snap/events.log
          # [...]
            - ".*self-skip-logs-collector.*"
        publish:
          - plugin_name: publisher-appoptics
  4. The kubernetes.yaml file configures the Kubernetes collector plugin. You can leave the file unmodified, which will watch for normal events in the default namespace (as shown in the code below), or you can define filters in the events field.

          incluster: true
          kubeconfigpath: ""
          interval: "60s"
          events: |
            # Embedded YAML (as a multiline string literal)
            - namespace: default
              type: normal
          grpc_timeout: 30
      plugin: snap-plugin-collector-aokubernetes
      task: task-aokubernetes.yaml
  5. To monitor events count in AppOptics, edit the task-aokubernetes.yaml task manifest so it contains the /kubernetes/events/count metric in the workflow.collect.metrics list, and copy the .yaml file to your working directory:

    version: 1
    # note: the schedule/workflow nesting below is assumed from the v1 task schema
    schedule:
      type: streaming
    deadline: "55s"
    workflow:
      collect:
        config:
          /kubernetes:
            MaxCollectDuration: "2s"
            MaxMetricsBuffer: 250
        metrics:
          /kubernetes/events/count: {}
          /kubernetes/pod/*/*/*/status/phase/Running: {}
        publish:
          - plugin_name: publisher-appoptics
            config:
              period: 60
              floor_seconds: 60
  6. Create three configmaps:

       kubectl create configmap plugin-configs --from-file=./examples/event-collector-configs/logs-v2.yaml --from-file=./examples/event-collector-configs/kubernetes.yaml --namespace=kube-system
       kubectl create configmap task-manifests --from-file=./examples/event-collector-configs/task-aokubernetes.yaml --namespace=kube-system
       kubectl create configmap task-autoload --from-file=./examples/event-collector-configs/task-logs-k8s.yaml --namespace=kube-system

       kubectl describe configmaps -n kube-system plugin-configs task-manifests task-autoload
  7. Create an Events Collector Deployment. This will automatically create a corresponding ServiceAccount:

    kubectl apply -k ./deploy/overlays/stable/events-collector/
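Building on the filter shown in step 4, the events field takes a list of namespace/type entries. Assuming the schema simply repeats those keys (this extension is illustrative, not taken from the original configs), also watching warning events in kube-system might look like:

```yaml
# Illustrative extension of the events filter in kubernetes.yaml
events: |
  - namespace: default
    type: normal
  - namespace: kube-system
    type: warning
```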

Watch your cluster events in Loggly


After you've successfully set up the SolarWinds Snap Agent, enable the Kubernetes plugin in the AppOptics UI and you should see data pour in. Learn more about Charts and Dashboards.


When the APM Integrated Experience is enabled, AppOptics shares a common navigation and enhanced feature set with the other integrated experiences' products. How you navigate AppOptics and access its features may vary from these instructions. For more information, go to the APM Integrated Experience documentation.

The scripts are not supported under any SolarWinds support program or service. The scripts are provided AS IS without warranty of any kind. SolarWinds further disclaims all warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The risk arising out of the use or performance of the scripts and documentation stays with you. In no event shall SolarWinds or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the scripts or documentation.