OP5 Monitor

Slim Poller 2.0

Note: This feature is a technology preview and is only available for limited release.

Overview

Slim Poller 2.0 brings new capabilities for deployment in Kubernetes environments.

This page walks you through all the new features and provides instructions on setting up and deploying your master and Slim Poller nodes in a Kubernetes environment.

New features

UUID

It is now possible to enable node identification using UUID instead of using IP. This can be useful if your TCP packets have a non-unique outgoing IP address, such as when behind a NAT or if your nodes' incoming IP addresses and outgoing IP addresses differ. This is also useful for setting up multiple Slim Pollers in Kubernetes.

There are two new Merlin settings used to configure the UUID:

  • ipc_uuid — sets the UUID of the local node. This is a top-level setting in /opt/monitor/op5/merlin.conf.

  • uuid — identifies a connecting node by the given UUID. This must be set in the node's configuration block.

A UUID must be 36 characters long. To generate a well-formed UUID, you can use the mon id generate command.
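
For example, running the command on a node prints a new 36-character UUID, such as the one used in the configuration examples below:

mon id generate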

The following are example UUID configurations for a master and a poller:


master merlin.conf

....
poller op5-slim-poller-ssh-65799d958f-h9wjt {
        uuid = de5c4eb9-dc9e-4b53-831c-246d254ad39e
        hostgroup = k8s_group
        address = IP_ADDRESS
        port = 15551
}

poller merlin.conf

log_level = info;
use_syslog = 1;
ipc_uuid = de5c4eb9-dc9e-4b53-831c-246d254ad39e
...
master master {
  address = MASTER_IP
  port = 15551
  connect = no
}

 

In the above case, the master identifies the poller by its UUID. However, the poller uses regular IP identification for connections from the master. This is common in passive poller mode, where the master does not make active connections to the poller.

If you want to use UUID to identify both components, you would need to add ipc_uuid to the master merlin.conf file, and the corresponding uuid setting in the master node configuration on the poller.
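
The following is a minimal sketch of that addition. MASTER_UUID is only a placeholder; generate a real value with mon id generate.

master merlin.conf

ipc_uuid = MASTER_UUID

poller merlin.conf

master master {
  address = MASTER_IP
  port = 15551
  connect = no
  uuid = MASTER_UUID
}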

File sync for mon oconf fetch

It is possible to sync files between OP5 Monitor nodes. For guidance, see How to Synchronize files and folders with Merlin in load balanced setup.

By default, files are pushed from one node to another. However, it is also possible to fetch files from a remote node. For example, a poller could be set up to fetch custom plugins from a master server.

To set up file fetching, first configure the node to fetch from a master. Then, add a sync section.

Files from the sync section are only synced when using the --sync argument with the mon oconf fetch command. An example configuration can be seen below:

poller.conf
master master {
  address = IP_ADDRESS
  port = 15551
  sync {
    /opt/plugins/custom/
  }
  object_config {
    fetch_name = poller
    fetch = mon oconf fetch --sync master
  }
}
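
With this configuration in place, the fetch command defined in the object_config section can also be run manually on the poller to pull the listed files immediately:

mon oconf fetch --sync master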

 

Files are only synced when a Naemon configuration change is made. If you need to trigger a sync in other situations, see mon oconf remote-fetch.

mon oconf remote-fetch

The command mon oconf remote-fetch tells a remote node to fetch from the current node. This only works if the remote node is correctly configured to fetch from the node that calls mon oconf remote-fetch.

This command can be useful if you want to manually trigger the poller to fetch a new file; for example, if you have added a new custom plugin.

It is possible to trigger a fetch on a specific node:

mon oconf remote-fetch poller-name

Or a type of node:

mon oconf remote-fetch type=poller

Command usage:

remote-fetch     [--type=<peer|poller>] [<node>]
       Tells a specific node to fetch split configuration from this node.

       NOTE: A configuration variable called "fetch_name" is
       required in the object_config section of merlin.cfg on the
       remote node. The variable should be set to the name of the node,
       as seen by the master.

Cluster update

cluster_update is a Merlin module setting that takes a command. The command is run when a node receives a signal from a master that its cluster configuration is invalid. Use this setting to point to a script that automatically updates the cluster configuration.

The following configuration is used by the autoscaling functionality in the Slim Poller. You can create a custom script to use instead.

merlin.conf
module {
  cluster_update = /usr/local/bin/cluster_tools.py --update
  log_file = stdout;
  notifies = no
}
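
If you create a custom script, a minimal starting point is a wrapper that records the event and then performs the actual update. The sketch below is illustrative only; it assumes the bundled cluster_tools.py is still available to do the real work, and the last line is where your own update logic would go instead.

#!/bin/bash
# Hypothetical cluster_update wrapper: run by Merlin when a master
# signals that this node's cluster configuration is invalid.
logger -t slim-poller "cluster configuration reported invalid, updating"
exec /usr/local/bin/cluster_tools.py --update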

Autoscaling Slim Poller in Kubernetes

With Slim Poller 2.0, it is possible to make use of the following Kubernetes features:

  • Autoscaling in Kubernetes. For more information, see Horizontal Pod Autoscaler.

  • kubectl command to manually increase the number of Slim Pollers running.

Setup for autoscaling

In order to achieve autoscaling, the following is set up:

  • A container entry script connects to a designated master and registers the node with all relevant masters and peers in the cluster.

  • The cluster_update setting is used to detect any changes to the cluster. When a change is detected, a connection is established with the designated master and the cluster configuration is updated.

  • Slim Pollers are identified by UUID on the master, but not by their peers.

  • The address of each Slim Poller must be an address that is reachable from within the Kubernetes cluster; for example, the pod IP. It is not necessary for this IP to be reachable from masters outside the Kubernetes cluster.

A number of environment variables are used to configure this, such as the master IP, poller hostgroup, and so on. For more information, see Setting environment variables.

Setting environment variables

For autoscaling to work correctly, you need to set up the following environment variables:

  • MASTER_ADDRESS: The address of the designated master node.

  • MASTER_NAME: Name of the master node.

  • MASTER_PORT: Merlin port of the master node. By default, this is set to 15551.

  • POLLER_ADDRESS: The address that this poller should use. Use the Kubernetes pod IP.

  • POLLER_NAME: Name of the poller. In autoscaling, this name is generated by Kubernetes.

  • POLLER_HOSTGROUPS: One or more hostgroups that the poller is responsible for. If there are multiple hostgroups, specify them in a comma-separated list. These hostgroups must exist on the master server prior to container startup.

  • FILES_TO_SYNC: Optional. Comma-separated list of paths to sync from the master server.

The following example shows a YAML file with the environment variables configured for autoscaling in Kubernetes:

example-autoscaling.yaml
         env:
         - name: MASTER_ADDRESS
           value: "IP_ADDRESS_HERE"
         - name: MASTER_NAME
           value: "master"
         - name: MASTER_PORT
           value: "15551"
         - name: POLLER_ADDRESS
           valueFrom:
             fieldRef:
               fieldPath: status.podIP
         - name: POLLER_NAME
           valueFrom:
             fieldRef:
               fieldPath: metadata.name
         - name: POLLER_HOSTGROUPS
           value: "GROUP1, GROUP2"
         - name: FILES_TO_SYNC
           value: "/opt/plugins/custom/, /opt/plugins/check_nagios"

Installing SSH keys

For autoregistration to work correctly, the Slim Poller container must have SSH keys installed that are authorized on the master server. One way to achieve this is to create a new Docker image based on the Slim Poller image.

Caution: The SSH key added in this image must also be added to /opt/monitor/.ssh/authorized_keys on the designated master server.

The following is an example Dockerfile that installs the SSH keys:

FROM op5com/slim-poller_naemon-core:slim-poller-2.0-prerelease

COPY --chown=monitor:root id_rsa /opt/monitor/.ssh/id_rsa
COPY --chown=monitor:root id_rsa.pub /opt/monitor/.ssh/authorized_keys

RUN chmod 600 /opt/monitor/.ssh/id_rsa
RUN chmod 644 /opt/monitor/.ssh/authorized_keys
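
The id_rsa and id_rsa.pub files referenced above are not included; one way to create them, build the image, and authorize the key on the designated master is sketched below. The image name my-slim-poller is only an example.

# generate a key pair without a passphrase in the Docker build directory
ssh-keygen -t rsa -b 4096 -N "" -f id_rsa

# build the derived image containing the key
docker build -t my-slim-poller .

# on the designated master, authorize the public key for the monitor user
cat id_rsa.pub >> /opt/monitor/.ssh/authorized_keys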

Scaling to a higher number of replicas

After starting a Slim Poller deployment, you can scale up the replicas manually by using the kubectl command:

kubectl scale deployment.v1.apps/op5-slim-poller --replicas=2
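
To verify that the additional replicas have started, you can list the pods using the label from the example deployment file:

kubectl get pods -l app=op5-slim-poller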

 

You can also use Kubernetes autoscaling. For more information, see Horizontal Pod Autoscaler.
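
As a starting point, the example deployment could be autoscaled on CPU usage with the kubectl autoscale command. The limits below are placeholders, and CPU-based autoscaling also requires resource requests to be set on the containers:

kubectl autoscale deployment.v1.apps/op5-slim-poller --min=1 --max=5 --cpu-percent=80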

Example Kubernetes deployment file

slim-poller-kubernetes.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: op5-slim-poller
  name: op5-slim-poller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: op5-slim-poller
      name: op5-slim-poller
  template:
    metadata:
      labels:
        app: op5-slim-poller
        name: op5-slim-poller
    spec:
      volumes:
        - name: ipc
          emptyDir: {}
        - name: merlin-conf
          emptyDir: {}
      terminationGracePeriodSeconds: 240
      containers:
       - name: naemon-core
         image: op5com/slim-poller_naemon-core:slim-poller-2.0-prerelease
         volumeMounts:
          - mountPath: /var/run/naemon
            name: ipc
          - mountPath: /opt/monitor/op5/merlin
            name: merlin-conf
         livenessProbe:
           exec:
             command:
             - /usr/bin/mon
             - containerhealth
             - core
           initialDelaySeconds: 120
           periodSeconds: 30
           failureThreshold: 5
         env:
         - name: MASTER_ADDRESS
           value: "MASTER_IP"
         - name: MASTER_NAME
           value: "master"
         - name: MASTER_PORT
           value: "15551"
         - name: POLLER_ADDRESS
           valueFrom:
             fieldRef:
               fieldPath: status.podIP
         - name: POLLER_NAME
           valueFrom:
             fieldRef:
               fieldPath: metadata.name
         - name: POLLER_HOSTGROUPS
           value: "group1, group2"
         - name: FILES_TO_SYNC
           value: "/opt/plugins/custom/"
       - name: naemon-merlin
         image: op5com/slim-poller_naemon-merlin:slim-poller-2.0-prerelease
         livenessProbe:
           exec:
             command:
             - /usr/bin/mon
             - containerhealth
             - merlin
           initialDelaySeconds: 20
           periodSeconds: 30
         ports:
          - containerPort: 15551
            name: op5-merlin
            protocol: TCP
         volumeMounts:
          - mountPath: /var/run/naemon
            name: ipc
          - mountPath: /opt/monitor/op5/merlin
            name: merlin-conf
      restartPolicy: Always
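
The manifest above is saved as slim-poller-kubernetes.yaml and can be applied with:

kubectl apply -f slim-poller-kubernetes.yaml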

Installation steps

Preparation of master

For Slim Poller 2.0 to work correctly, an update to the master is required. All peered masters must be set up as follows:

  1. Install OP5 Monitor version 8.2.5, as normal.

  2. After installation, open the file /etc/yum.repos.d/op5-release.repo and update it to have the following content:


    op5-release.repo

    [op5-monitor-slim-poller-2.0]
    name=op5 Monitor Slim Poller 2.0
    baseurl=http://repos.op5.com/el7/x86_64/monitor/slim-poller-2.0/updates/
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-op5

    ##### op5 Monitor
    [op5-monitor-updates]
    name=op5 Monitor Updates
    baseurl=http://repos.op5.com/el$releasever/$basearch/monitor/2020.k/updates
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-op5

    ##### op5 Epel
    [op5-epel]
    name=op5 EPEL
    baseurl=http://repos.op5.com/epel/7/$basearch
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

     

    Save the file.

  3. Execute yum clean all && yum update.

Before you can set up the Slim Poller, ensure that the pollers are authorized to connect to the master. To do so, set up the SSH keys on the masters. For guidance, see Installing SSH keys.

Docker image

The Docker images can be found on Docker Hub under the op5com organization, tagged slim-poller-2.0-prerelease. To deploy the Docker images into Kubernetes, see the Example Kubernetes deployment file.
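
For example, the two images used in the example deployment file can be pulled with:

docker pull op5com/slim-poller_naemon-core:slim-poller-2.0-prerelease
docker pull op5com/slim-poller_naemon-merlin:slim-poller-2.0-prerelease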

Before starting the Slim Poller in Kubernetes, ensure that the hostgroups specified in the POLLER_HOSTGROUPS environment variable exist on the master.