Documentation for SolarWinds Platform Self-Hosted

Kubernetes requirements, deployment command examples, and container removal steps

This topic applies only to the following products:

SolarWinds Observability Self-Hosted

SAM, VMAN

Kubernetes (K8s) is one of the environments supported by the Container Monitoring feature. You can also monitor Kubernetes services in Microsoft Azure.

To monitor Kubernetes containers in the SolarWinds Platform, you'll need:

Kubernetes 1.23 and later

  • A Kubernetes platform with Metrics API enabled for the cluster. See Set up the metrics server.

  • The container management probe must have sufficient permissions (read access) to the Kubernetes API.

  • SSH access to the master server

  • Sudo privileges on the master server

  • Ports:

    • 38012: SolarWinds Observability Self-Hosted endpoint that listens for data coming from the container management probe.

Kubernetes 1.16 - 1.22

  • A Kubernetes platform with one of the following API versions enabled:

    • v1 and later

    • rbac.authorization.k8s.io/v1beta1

    • rbac.authorization.k8s.io

    • apps/v1beta1

    • extensions/v1beta1

  • Ports:

    • 4043: Target port/Container port (internal K8s communication)

    • 10250: Listening port for Kubelet agent

    • 30043: Node port (internal K8s communication)

    • 38012: SolarWinds Observability Self-Hosted endpoint that listens for data coming from the container management probe.

  • SSH access to the master server

  • Sudo privileges on the master server

  • A weaveworks/scope:1.13.2 image in the repository.
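
    If your cluster pulls images from a private registry, you can stage the image there ahead of time. A minimal sketch, assuming Docker is installed and <your-registry> is a placeholder for your registry address:

    docker pull weaveworks/scope:1.13.2
    docker tag weaveworks/scope:1.13.2 <your-registry>/weaveworks/scope:1.13.2
    docker push <your-registry>/weaveworks/scope:1.13.2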

You can also monitor containers hosted in the Azure Kubernetes Service (AKS). See Azure Kubernetes Service documentation for requirements.

Third-party links in this section are attributed to © 2020 Microsoft Corp., available at docs.microsoft.com, obtained on October 30, 2020.

Set up the metrics server

Setting up the metrics server is a requirement for monitoring Kubernetes 1.23 or later.

The probe is a stand-alone application running in a Kubernetes pod. It queries the Kubernetes API and sends the data to a SolarWinds Observability Self-Hosted endpoint, where it is processed further.

SolarWinds Observability Self-Hosted exposes an HTTP REST endpoint that listens for data from the container management probe on TCP port 38012 (TLS).
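
Before deploying the probe, you can confirm that this endpoint is reachable from the cluster network. A minimal check, assuming curl is available and <solarwinds-platform-host> is a placeholder for your SolarWinds Platform server:

    curl -vk https://<solarwinds-platform-host>:38012

If the TCP connection and TLS handshake complete, the port is reachable; the HTTP response body itself is not meaningful for this check.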

See Kubernetes requirements in Kubernetes documentation.

  1. Deploy and configure the metrics-server pod in your cluster by following the official Kubernetes instructions.

    See Installation instructions for Kubernetes metrics server in GitHub.

    You can use the following command:

    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
    
  2. Verify the metrics-server plugin.

    Run the following command to verify that the metrics-server is available. The output should inform you that there is a metrics-server-[id] ready and running.

    kubectl get pods -A
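
    For example, on a healthy cluster the listing includes an entry similar to the following (the pod name suffix, restart count, and age vary):

    NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
    kube-system   metrics-server-6d94bc8694-x7kqm   1/1     Running   0          2m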
  3. Run the following command to verify that the metrics-server pod is successfully exposing Kubernetes resource metrics in the Kubernetes API server. This command should return CPU and memory metrics for all pods in the cluster:

    kubectl top pods -A
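
    Illustrative output (pod names and values depend on your cluster):

    NAMESPACE     NAME                              CPU(cores)   MEMORY(bytes)
    kube-system   metrics-server-6d94bc8694-x7kqm   4m           18Mi
    kube-system   coredns-5d78c9869d-brvq2          2m           13Mi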
  4. Expose the resource metrics API to the Container Management probe.

    This requires a specific cluster configuration. You can use the following YAML as a template and modify it to fit your cluster configuration:

    The deployment definition contains the --kubelet-insecure-tls flag. In production environments, SolarWinds recommends that all certificates be signed by a cluster certificate authority. When the certificates are signed, remove the flag (see the example after the template).

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        k8s-app: metrics-server
        rbac.authorization.k8s.io/aggregate-to-admin: "true"
        rbac.authorization.k8s.io/aggregate-to-edit: "true"
        rbac.authorization.k8s.io/aggregate-to-view: "true"
      name: system:aggregated-metrics-reader
    rules:
    - apiGroups:
      - metrics.k8s.io
      resources:
      - pods
      - nodes
      verbs:
      - get
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        k8s-app: metrics-server
      name: system:metrics-server
    rules:
    - apiGroups:
      - ""
      resources:
      - nodes/metrics
      verbs:
      - get
    - apiGroups:
      - ""
      resources:
      - pods
      - nodes
      verbs:
      - get
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: view-metrics
    rules:
    - apiGroups:
        - "*"
      resources:
        - pods
        - nodes
      verbs:
        - get
        - list
        - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server-auth-reader
      namespace: kube-system
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: extension-apiserver-authentication-reader
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server:system:auth-delegator
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:auth-delegator
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      labels:
        k8s-app: metrics-server
      name: system:metrics-server
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:metrics-server
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: view-metrics
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: view-metrics
    subjects:
      - apiGroup: rbac.authorization.k8s.io
        kind: User
        name: system:serviceaccount:orion:default
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server
      namespace: kube-system
    spec:
      ports:
      - name: https
        port: 443
        protocol: TCP
        targetPort: https
      selector:
        k8s-app: metrics-server
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          k8s-app: metrics-server
      strategy:
        rollingUpdate:
          maxUnavailable: 0
      template:
        metadata:
          labels:
            k8s-app: metrics-server
        spec:
          containers:
          - args:
            - --cert-dir=/tmp
            - --secure-port=4443
            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
            - --kubelet-use-node-status-port
            - --metric-resolution=15s
            - --kubelet-insecure-tls
            image: registry.k8s.io/metrics-server/metrics-server:v0.6.4
            imagePullPolicy: IfNotPresent
            livenessProbe:
              failureThreshold: 3
              httpGet:
                path: /livez
                port: https
                scheme: HTTPS
              periodSeconds: 10
            name: metrics-server
            ports:
            - containerPort: 4443
              name: https
              protocol: TCP
            readinessProbe:
              failureThreshold: 3
              httpGet:
                path: /readyz
                port: https
                scheme: HTTPS
              initialDelaySeconds: 20
              periodSeconds: 10
            resources:
              requests:
                cpu: 100m
                memory: 200Mi
            securityContext:
              allowPrivilegeEscalation: false
              readOnlyRootFilesystem: true
              runAsNonRoot: true
              runAsUser: 1000
            volumeMounts:
            - mountPath: /tmp
              name: tmp-dir
          nodeSelector:
            kubernetes.io/os: linux
          priorityClassName: system-cluster-critical
          serviceAccountName: metrics-server
          volumes:
          - emptyDir: {}
            name: tmp-dir
    ---
    apiVersion: apiregistration.k8s.io/v1
    kind: APIService
    metadata:
      labels:
        k8s-app: metrics-server
      name: v1beta1.metrics.k8s.io
    spec:
      group: metrics.k8s.io
      groupPriorityMinimum: 100
      insecureSkipTLSVerify: true
      service:
        name: metrics-server
        namespace: kube-system
      version: v1beta1
      versionPriority: 100
     
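    Once your kubelet certificates are signed by the cluster certificate authority, one way to remove the flag is to edit the live deployment and delete the corresponding argument. A sketch of that step:

    # Opens the deployment in your default editor; remove the line
    #   - --kubelet-insecure-tls
    # from the metrics-server container args, then save to roll out the change.
    kubectl -n kube-system edit deployment metrics-server
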
  5. Save the configuration as a YAML file and run the following command on your Kubernetes cluster host:

    kubectl apply -f metrics-server-cman.yaml
  6. Run the following command to verify that the SolarWinds Platform account has permission to list pods. The output should say yes.

    kubectl auth can-i list pods --as=system:serviceaccount:orion:default
  7. Run the following command to verify that the SolarWinds Platform account has access to the Kubernetes Metrics API. Replace [NODE_NAME] with the name of your Kubernetes node. The output should say yes.

    kubectl auth can-i get /api/v1/nodes/[NODE_NAME]/proxy/metrics --as=system:serviceaccount:orion:default

Configure the Container Management probe in the SolarWinds Platform Web Console

The Container Management probe connects to the Metrics Server API to gather cluster statistics.

  1. In the SolarWinds Platform Web Console, click Settings > All Settings, and click Manage Container Services.

  2. Click Add Service, provide a name for the service, and in Environment type, select Kubernetes (v1.23-latest).

  3. Complete the Manage container service wizard. See Add a container service.

  4. When you have successfully completed all the commands, the Container Management probe starts gathering cluster data from the Kubernetes Resource API.

In a few minutes (the default polling interval is 300 seconds), your Kubernetes cluster statistics will display in the SolarWinds Platform Web Console.

Set up AKS container monitoring

This section describes how to set up Azure Kubernetes Service (AKS) to communicate with the SolarWinds Platform. Refer to the AKS documentation to configure AKS.

To configure AKS container monitoring in the SolarWinds Platform:

  1. If a VPN does not yet exist, create a point-to-site VPN connection from Azure to your local network, which involves setting up a Virtual Network (VNet), a VNet gateway subnet, a VNet gateway, a VM, and a root certificate for the VPN client.

  2. Create the Kubernetes service in Azure. To set up permissions, see Security concepts for applications and clusters.

  3. On the SolarWinds Platform server:

    1. Connect to the Azure VPN.

    2. Install the Azure Command-Line Interface (CLI).

    3. Log in to Azure and connect to the AKS cluster. See the AKS documentation for details, or the example sequence at the end of this step.

      If you cannot locate the cluster, add the --subscription parameter to the az aks get-credentials command.

      You can use the Kubernetes web dashboard or Kubernetes resource view to monitor your configuration. If you encounter permission issues with the web dashboard, use the following command:

      kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
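
      A typical login-and-connect sequence looks like the following sketch; the resource group, cluster name, and subscription ID are placeholders you replace with your own values:

      # Sign in to Azure and download kubeconfig credentials for the AKS cluster.
      az login
      az aks get-credentials --resource-group <resource-group> --name <cluster-name> --subscription <subscription-id>

      # Confirm kubectl can reach the cluster.
      kubectl get nodes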

  4. Follow steps in Add a container service to finish adding the service via the Manage Container Service wizard.

If container monitoring stops, see Troubleshoot container monitoring.

Delete Kubernetes namespaces from nodes

For Docker, Docker Swarm, and Apache Mesos, you need to delete containers and container images from nodes before deleting a container service in the SolarWinds Platform Web Console. For Kubernetes, delete namespaces from the node instead. In Kubernetes, namespaces are logical entities that represent cluster resources used by a set of users; in this case, the "user" is the SolarWinds Platform.

  1. Connect to the node via SSH.
  2. Run the following command:

    sudo kubectl delete namespace orion
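
    Optionally, you can confirm the namespace is gone before returning to the Web Console. The following check, assuming the orion namespace was used, returns no output once deletion completes:

    sudo kubectl get namespaces | grep orion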

  3. When the service status switches to Down on the Container Services page, delete the container service by selecting it, and then clicking Delete.