Photo by Sudhith Xavier on Unsplash

A watchdog for every modern *ix server

Old but Still Gold

Article from ADMIN 77/2023
Monit is a lightweight, performant, and seasoned solution that you can drop into old running servers or bake into new servers for full monitoring and proactive healing.

At the back end, every business is an IT business, running on physical and virtual servers. The backbone of modern IT is cloud-based infrastructure built primarily on GNU/Linux servers. In the cloud-native world, servers should be intelligent and designed to heal themselves proactively when internal problems arise. Unfortunately, many businesses still run server operations in old-world reactive mode; sooner or later, enough server issues pile up to cause frequent outages and the revenue losses that follow.

A solution to this problem is Monit [1], a lightweight, free, open source tool that monitors *ix systems and performs automatic maintenance and repair. I use Footloose container machines to test everything covered here; a commonly available Docker Engine is the only requirement to run the example code.

Getting Started

As the name indicates, the first set of functionality provided by Monit is watching over process, file, FIFO, filesystem, directory, system, program, and remote host server resources. To begin, I'll explore monitoring the common system, filesystem, and process resources.

The first thing to do is set up a container machine running Monit. Listing 1 [2] creates the Ubuntu 22.04 LTS base image used to create further images for the test container machines. To generate the base image, go to your terminal and execute the command:

docker build -f Dockerfile_UbuntuJammyJellyfish . -t ubuntujjf

Listing 1


01 FROM ubuntu:22.04
03 ENV container docker
05 # Don't start any optional services except for the few we need.
06 RUN find /etc/systemd/system /lib/systemd/system -path '*.wants/*' -not -name '*journald*' -not -name '*systemd-tmpfiles*' -not -name '*systemd-user-sessions*' -exec rm \{} \;
08 RUN apt-get update && apt-get install -y dbus systemd openssh-server net-tools iproute2 iputils-ping curl wget vim-tiny sudo && apt-get clean && rm -rf /var/lib/apt/lists/*
10 RUN >/etc/machine-id
11 RUN >/var/lib/dbus/machine-id
13 EXPOSE 22
15 RUN systemctl set-default multi-user.target
16 RUN systemctl mask dev-hugepages.mount sys-fs-fuse-connections.mount systemd-update-utmp.service systemd-tmpfiles-setup.service console-getty.service
17 RUN systemctl disable networkd-dispatcher.service
19 # This container image doesn't have locales installed. Disable forwarding the
20 # user locale env variables or we get warnings such as:
21 #  bash: warning: setlocale: LC_ALL: cannot change locale
22 RUN sed -i -e 's/^AcceptEnv LANG LC_\*$/#AcceptEnv LANG LC_*/' /etc/ssh/sshd_config
24 #
27 CMD ["/bin/bash"]

Next, create Dockerfile_UbuntuJJFMonit (Listing 2) and the helper script it copies in (Listing 3); then, create an image containing everything required to run Monit by executing the command:

docker build -f Dockerfile_UbuntuJJFMonit . -t ubuntujjfmnt:5.33.0

Listing 2


01 FROM ubuntujjf
03 COPY setup_monit_srvc.sh /usr/local/bin/
04 RUN setup_monit_srvc.sh && rm /usr/local/bin/setup_monit_srvc.sh

Listing 3

001 set -uo pipefail
003 MONITVER='5.33.0'
004 MONITBDIR='/opt/monit/bin'
005 MONITCDIR='/opt/monit/conf'
006 MONITSDIR='/opt/monit/monit.d'
007 MONITVFLE='/lib/systemd/system/monit.service'
008 RQRDCMNDS="chmod
009           cp
010           echo
011           rm
012           systemctl
013           tar
014           tee
015           wget"
017 preReq() {
019   for c in ${RQRDCMNDS}
020   do
021     if ! command -v "${c}" > /dev/null 2>&1
022     then
023       echo " Error: required command ${c} not found, exiting ..."
024       exit 1
025     fi
026   done
028 }
030 instlMonit() {
032   if ! wget "https://mmonit.com/monit/dist/binary/${MONITVER}/monit-${MONITVER}-linux-x64.tar.gz" -O /tmp/monit.tgz
033   then
034     echo "wget https://mmonit.com/monit/dist/binary/${MONITVER}/monit-${MONITVER}-linux-x64.tar.gz -O /tmp/monit.tgz failed, exiting ..."
035     exit 1
036   fi
038   if ! tar -C /tmp -zxf /tmp/monit.tgz
039   then
040     echo "tar -C /tmp -zxf /tmp/monit.tgz failed, exiting ..."
041     exit 1
042   else
043     mkdir -p "${MONITBDIR}"
044     if ! cp "/tmp/monit-${MONITVER}/bin/monit" "${MONITBDIR}/monit"
045     then
046       echo "cp /tmp/monit-${MONITVER}/bin/monit ${MONITBDIR}/monit failed, exiting ..."
047       exit 1
048     else
049       rm -rf /tmp/monit{.tgz,-"${MONITVER}"}
050     fi
051   fi
053 }
055 cnfgrMonit() {
057   mkdir "${MONITCDIR}"
058   tee "${MONITCDIR}/monitrc" <<EOF
059 #CHECK SYSTEM monitdemo
060 set daemon 10
061 set log /var/log/monit.log
063 set mail-format { from: monit@richnusgeeks.demo }
065 set httpd port 2812 and
066   use address 0.0.0.0
067   allow localhost
068   allow 0.0.0.0/0.0.0.0
069   allow admin:monit
070   allow guest:guest readonly
072 include /opt/monit/monit.d/*
073 EOF
075   if ! chmod 0600 "${MONITCDIR}/monitrc"
076   then
077     echo "chmod 0600 ${MONITCDIR}/monitrc failed, exiting ..."
078     exit 1
079   else
080     mkdir "${MONITSDIR}"
081   fi
083 }
085 setupMonitSrvc() {
087   tee "${MONITVFLE}" <<'EOF'
088 [Unit]
089 Description=Pro-active monitoring utility for Unix systems
091 Documentation=man:monit(1)
093 [Service]
094 Type=simple
095 KillMode=process
096 ExecStart=/opt/monit/bin/monit -I -c /opt/monit/conf/monitrc
097 ExecStop=/opt/monit/bin/monit -c /opt/monit/conf/monitrc quit
098 ExecReload=/opt/monit/bin/monit -c /opt/monit/conf/monitrc reload
099 Restart=on-abnormal
100 StandardOutput=journal
101 StandardError=journal
103 [Install]
104 WantedBy=multi-user.target
105 EOF
107   if ! systemctl enable monit
108   then
109     echo ' systemctl enable monit failed, exiting ...'
110     exit 1
111   fi
113 }
115 main() {
117   preReq
118   instlMonit
119   cnfgrMonit
120   setupMonitSrvc
122 }
124 main 2>&1

Finally, create footloose.yaml (Listing 4) and bring up your Monit container machine with the command:

footloose create && echo && footloose show

Listing 4


01 cluster:
02   name: cluster
03   privateKey: cluster-key
04 machines:
05 - count: 1
06   spec:
07     backend: docker
08     image: ubuntujjfmnt:5.33.0
09     name: monit%d
10     privileged: true
11     portMappings:
12     - containerPort: 22
13     - containerPort: 2812

Now access the Monit web user interface (UI) through localhost:<local port for 2812> in either Chrome Incognito or Firefox Private Window mode so the browser does not cache the entered credentials. You should be presented with a login dialog. After entering the credentials guest/guest, you should see the Monit page shown in Figure 1. Congrats, your faithful servant Monit is ready to serve your servers.

Figure 1: Monit web UI.

By default, the page shows the system name as the machine hostname and provides real-time information about machine load, CPU, memory, and swap usage. Clicking on the system name brings up another page with more real-time info about your machine. To launch another page that shows the runtime and configuration details, click on running under the Monit Service Manager heading.

The setup and configuration of Monit is a cakewalk: just extract its binary and drop it onto your server, create its control file, and run it. You have now configured Monit to allow a read-only guest user and a full-control admin user. If you log in with the admin/monit credentials, you should see buttons to disable and enable monitoring, view the logfile, and so on (Figures 2 and 3).

Figure 2: Admin user with a button to enable or disable monitoring.
Figure 3: More buttons for the admin user.

Next, you should explore the Monit command line and its configuration file. To get into your machine and see the help screen, enter:

footloose ssh root@monit0
/opt/monit/bin/monit -c /opt/monit/conf/monitrc -h

An important command,

/opt/monit/bin/monit -c /opt/monit/conf/monitrc -t

verifies that your Monit control file is syntactically correct; otherwise, Monit won't start. If you need to try out or apply new Monit configuration settings, always run the syntax check before applying them with:

/opt/monit/bin/monit -c /opt/monit/conf/monitrc reload
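
A small wrapper makes this check-then-reload habit hard to forget. The helper below is only a sketch (the safe_reload name and the MONIT_BIN/MONIT_RC variables are mine, not part of Monit):

```shell
#!/usr/bin/env bash
# Only reload the running Monit daemon if the control file parses cleanly.
# Paths match the article's layout; override via environment if yours differ.
MONIT_BIN="${MONIT_BIN:-/opt/monit/bin/monit}"
MONIT_RC="${MONIT_RC:-/opt/monit/conf/monitrc}"

safe_reload() {
  if ! "$MONIT_BIN" -c "$MONIT_RC" -t; then
    echo "monitrc failed the syntax check, not reloading" >&2
    return 1
  fi
  "$MONIT_BIN" -c "$MONIT_RC" reload
}
```

With this in place, a broken edit to monitrc is rejected before it can reach the running daemon.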

Command-line options can stop and start the services you put under the Monit watchdog, as well as enable and disable monitoring. These options are useful, for example, when performing server maintenance during downtime. Similarly, you have options to see the status, get a summary, or create a report to gather information about server resources under Monit.

The control or configuration file for Monit is monitrc, created by the script in Listing 3. If you're still logged in to your Monit container machine, you can dump it with:

cat /opt/monit/conf/monitrc

If you don't want Monit to take the system name from your hostname by default, whether to conform to a unique identification scheme or to avoid a conflict with a same-named service running under Monit, add CHECK SYSTEM <unique name> to the monitrc file. The set daemon statement (line 60) sets the polling interval in seconds (10 seconds in this example); the Monit daemon wakes up at this interval to check the server resources under its watch and take any configured actions.
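
For example, a monitrc fragment along these lines names the system and adds basic resource alerts; the thresholds are illustrative values of my own, but the statements follow the standard Monit syntax:

```
CHECK SYSTEM monitdemo
  if loadavg (5min) > 4 then alert
  if memory usage > 80% for 3 cycles then alert
  if cpu usage (user) > 70% for 5 cycles then alert
```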

The set log statement (line 61) configures Monit logging, and set mail-format (line 63) sets the default format for alert mails, in this case the sender address used whenever an event message is sent for any service. The web UI port is configured through set httpd port (line 65), and the use address line sets the binding address for the web UI.

Lines 67-70 allow internal and external hosts to access the web UI and set the various <user>:<password> properties. You can see that the guest user is configured as read only, whereas the admin user has full control. If you don't want to configure web UI passwords in plaintext, you can use a .htpasswd file. I used an online htpasswd generator [3] to generate the .htpasswd entries for the guest and admin users (Listing 5). The limitation of htpasswd here is that you can't use the read only qualifier.

Listing 5

.htpasswd Configuration Diff

>   allow md5 /etc/.htpasswd admin guest
>   tee /etc/.htpasswd <<'EOF'
> guest:$apr1$gz4n7s6o$P.O/V1k9rZuV9nN/5lh3l0
> admin:$apr1$esczj7wu$ffu/6j8vETMAMJaVTKn7a1
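
If you would rather not paste passwords into a website, the same apr1 hashes can be generated locally. This sketch assumes the openssl command-line tool is installed; the user names and passwords mirror the article's demo accounts:

```shell
# Emit htpasswd-style user:hash lines using Apache's apr1 (MD5) scheme.
htpasswd_entry() {
  printf '%s:%s\n' "$1" "$(openssl passwd -apr1 "$2")"
}

htpasswd_entry guest guest
htpasswd_entry admin monit
```

Redirect the output to /etc/.htpasswd and reference it with the allow md5 line shown in Listing 5.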

Monit is configured to load service-specific configurations from the /opt/monit/monit.d directory with an include line. You will need more settings for the mail format, the mail server for alerts, and so on when taking Monit beyond the quick test setup in this article. The Monit documentation [4] does a good job of explaining in detail the settings not covered here.
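
As a concrete example of such a drop-in, a file like /opt/monit/monit.d/sshd could watch the SSH daemon already installed in the container image. The pidfile path and unit name here are my assumptions, so adjust them to your distribution; the check itself uses standard Monit syntax:

```
check process sshd with pidfile /run/sshd.pid
  start program = "/usr/bin/systemctl start ssh"
  stop program = "/usr/bin/systemctl stop ssh"
  if failed port 22 protocol ssh then restart
```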

Monit displays a lot of system information by default, without being configured to watch any specific server resource. The first check to configure is a filesystem check, so that Monit can monitor space and inode usage and act when the configured criteria are met. Listing 6 shows the diff of the additional code required in the setup script, which is used to create another container machine image (Listing 7) with the command:

docker build -f Dockerfile_UbuntuJJFMonitFSC . -t ubuntujjfmntfsc:5.33.0

Listing 6

setup_monit_srvc{,_fscheck}.sh Diff

> cnfgrMonitFSCheck() {
>   mkdir "${MONITSDIR}"
>   tee "${MONITSDIR}/spaceinode" <<EOF
> check filesystem rootfs with path /
>   if space usage > ${DSPCWMARK}% then alert
>   if inode usage > ${INDEWMARK}% then alert
> EOF
> }
>   cnfgrMonitFSCheck

Listing 7

Dockerfile_UbuntuJJFMonit{,FSC} Diff

< FROM ubuntujjf
> FROM ubuntujjfmnt:5.33.0
< COPY setup_monit_srvc.sh /usr/local/bin/
> COPY setup_monit_srvc_fscheck.sh /usr/local/bin/

Once the new image is created successfully, you can clean up the already running Monit test machine with the command:

footloose delete

Now change your footloose.yaml as shown in the diff in Listing 8 and execute the command

footloose create && echo && footloose show

Listing 8

footloose.yaml Diff

<     image: ubuntujjfmnt:5.33.0
<     name: monit%d
>     image: ubuntujjfmntfsc:5.33.0
>     name: monitfsc%d

to bring up the new container machine. Access the Monit web UI through the local port mapped to the container machine's port 2812; you should now see a filesystem check entry (Figure 4).

Figure 4: Monit filesystem check configured.

Clicking on rootfs takes you to another page with more details about the mounted filesystem partition. Whenever this partition's space or inode usage hits the configured percentage watermark, Monit fires an alert to the configured email address. You could go further and set meaningful actions, such as running a space cleanup script, to let Monit proactively and automatically fix a space crunch.
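
Such a cleanup action is just a script plus one more line in the check, for instance an exec action of the form if space usage > 85% then exec "/path/to/cleanup-script" (the script path and the age threshold below are my illustrative choices, not from the article):

```shell
#!/usr/bin/env bash
# Sketch of a disk cleanup handler a Monit 'exec' action could trigger:
# delete regular files not modified for a given number of days.
clean_old_files() {
  local dir="$1" age_days="${2:-7}"
  # -xdev keeps find on one filesystem; -delete removes matching files.
  find "$dir" -xdev -type f -mtime +"$age_days" -delete
}
```

Called as, say, clean_old_files /var/tmp 7, this keeps recently touched files and removes stale ones.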

Now you should feel comfortable enough to run Monit and configure more checks covering other vital server resources. The Monit documentation provides complete detailed information about various resources and the corresponding configuration syntax.

Monit Superpowers

I played with Monit's great capabilities and was able to monitor almost all the vital resources of my *ix servers. However, its real potential is taking meaningful actions in response to various events with the use of a domain-specific language (DSL) to bake auto-healing and remediation into servers. In this way, you can automate all of your server-related, otherwise manual runbook actions to make your IT operations proactive. For example, Monit could run handlers to clean the server disk volumes automatically when the storage watermark hits some threshold.

In another case, the Monit watchdog could restart your server processes in a required order if they consume a large amount of system resources. Monit could handle all the actions that an operations team is supposed to take manually to keep the lights on. Moving reactive manual operations on your general and business-critical servers to a proactive watchdog means profit and profit only.

The next test arrangement will familiarize you with Monit's auto-healing functionality: a Kafka server container machine running ZooKeeper and Kafka processes, along with a Dockerized Kafka web UI. The helper script diff (Listing 9) is used in the Dockerfile (Listing 10) to set up everything required. To create an image for the new test container machine, execute:

docker build -f Dockerfile_UbuntuJJFMonitKafka . -t ubuntujjfmntkfk:3.4.0

Listing 9

setup_monit_srvc_{fscheck,kafka}.sh Diff

> KAFKAVER='3.4.0'
> KFKSCVER='2.13'
< RQRDCMNDS="chmod
> RQRDCMNDS="apt-get
>           chmod
>           mkdir
>           usermod
<   if ! wget "https://mmonit.com/monit/dist/binary/${MONITVER}/monit-${MONITVER}-linux-x64.tar.gz" -O /tmp/monit.tgz
>   if ! wget -q "https://mmonit.com/monit/dist/binary/${MONITVER}/monit-${MONITVER}-linux-x64.tar.gz" -O /tmp/monit.tgz
> setupDckrCmps() {
>   if ! wget -q "https://get.docker.com" -O /tmp/get-docker.sh
>   then
>     echo "wget https://get.docker.com -O /tmp/get-docker.sh failed, exiting ..."
>     exit 1
>   fi
>   if ! sh /tmp/get-docker.sh
>   then
>     echo "sh /tmp/get-docker.sh failed, exiting ..."
>     exit 1
>   else
>     rm -f /tmp/get-docker.sh
>     if ! usermod -aG docker "$(whoami)"
>     then
>      echo "usermod -aG docker $(whoami) failed, exiting ..."
>       exit 1
>     fi
>   fi
> }
> setupKafkaWebUI() {
>   mkdir -p /opt/docker/compose
>   tee /opt/docker/compose/kafkawebui.yml <<EOF
> version: "2.4"
> services:
>   kafkawebui:
>     image:
>     container_name: kafkawebui
>     hostname: kafkawebui
>     mem_limit: 512m
>     network_mode: host
>     restart: unless-stopped
>     environment:
>       - KAFKA_BROKERS=localhost:9092
> EOF
>   tee "${MONITSDIR}/kfkwebui" <<EOF
> check program kfkwebui with path "/usr/bin/docker compose -f /opt/docker/compose/kafkawebui.yml up -d"
>   if status != 0 then unmonitor
> EOF
> }
> setupZkprKfk() {
>   if ! DEBIAN_FRONTEND=noninteractive apt-get install -y -qq --no-install-recommends openjdk-8-jre-headless >/dev/null
>   then
>     echo "apt-get install -y --no-install-recommends openjdk-8-jre-headless failed, exiting ..."
>     exit 1
>   fi
>   if ! wget -q "https://archive.apache.org/dist/kafka/${KAFKAVER}/kafka_${KFKSCVER}-${KAFKAVER}.tgz" -O /tmp/kafka.tgz
>   then
>     echo "wget https://archive.apache.org/dist/kafka/${KAFKAVER}/kafka_${KFKSCVER}-${KAFKAVER}.tgz -O /tmp/kafka.tgz failed, exiting ..."
>     exit 1
>   fi
>   if ! tar -C /tmp -zxf /tmp/kafka.tgz
>   then
>     echo "tar -C /tmp -zxf /tmp/kafka.tgz failed, exiting ..."
>     exit 1
>   else
>     if ! mv "/tmp/kafka_${KFKSCVER}-${KAFKAVER}" /opt/kafka
>     then
>       echo "mv /tmp/kafka_${KFKSCVER}-${KAFKAVER} /opt/kafka failed, exiting ..."
>       exit 1
>     else
>       rm -f /tmp/kafka.tgz
>     fi
>   fi
> }
> cnfgrMonitKafkaCheck() {
>   tee "${MONITSDIR}/zkprkfk" <<EOF
> check process zookeeper matching zookeeper
>   start program = "/opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties" with timeout 60 seconds
>   stop program = "/opt/kafka/bin/zookeeper-server-stop.sh /opt/kafka/config/zookeeper.properties" with timeout 60 seconds
>   if failed port 2181 for 6 cycles then restart
> check process kafka matching kafkaServer
>   depends on zookeeper
>   start program = "/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties" with timeout 60 seconds
>   stop program = "/opt/kafka/bin/kafka-server-stop.sh /opt/kafka/config/server.properties" with timeout 60 seconds
>   if failed port 9092 for 6 cycles then restart
> EOF
> }
>   setupDckrCmps
>   setupKafkaWebUI
>   setupZkprKfk
>   cnfgrMonitKafkaCheck

Now clean up the running container machine and bring up the new test machine by executing the footloose commands shown in the previous section, with the footloose.yaml diff shown in Listing 11. Accessing the Monit web UI should now show the ZooKeeper and Kafka process checks and the kfkwebui program check. Clicking on these entries takes you to the runtime and configuration details of the respective services. You can also access the Kafka web UI, which Monit's program check brings up by executing a docker compose command, to confirm a ready Kafka server (Figure 5).

Listing 10

Dockerfile_UbuntuJJFMonit{FSC,Kafka} Diff

< FROM ubuntujjfmnt:5.33.0
> FROM ubuntujjfmntfsc:5.33.0
< COPY setup_monit_srvc_fscheck.sh /usr/local/bin/
> COPY setup_monit_srvc_kafka.sh /usr/local/bin/

Listing 11

footloose.yaml Diff

<     image: ubuntujjfmntfsc:5.33.0
<     name: monitfsc%d
>     image: ubuntujjfmntkfk:3.4.0
>     name: monitkfk%d
>     - containerPort: 8080
Figure 5: Kafka cluster web UI overview.

I configured Monit to start and watch over a process matching the ZooKeeper pattern first. Once the ZooKeeper process is up and running, a dependent Kafka process matching the kafkaServer pattern is started. You could use a process ID (PID) file in the check process instead of pattern matching if your process creates that file.

The Monit command-line option procmatch is handy if you need to establish, by trial and error, a pattern that matches a unique process of interest. The processes under the Monit watchdog are considered alive when they respond on their respective ports. If the port check fails for the configured number of consecutive cycles, Monit keeps restarting the process with the stop and start commands until the check succeeds. Note that you should adjust the start and stop timeouts to the process startup and shutdown times. In the case of service dependencies, the dependent process is stopped first and started again after the successful restart of the process on which it depends.
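
You can reproduce Monit's port test by hand when tuning those cycle counts. The probe below is a rough equivalent for illustration, not Monit's own code; it relies on bash's /dev/tcp pseudo-device, so it won't work in plain sh:

```shell
# Succeeds (exit 0) if a TCP connection to host:port can be opened,
# similar in spirit to Monit's 'if failed port' test each cycle.
port_alive() {
  local host="$1" port="$2"
  # The subshell opens (and implicitly closes) fd 3 on the connection.
  (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null
}

# Example: poll ZooKeeper the way Monit does each cycle.
# port_alive localhost 2181 && echo "zookeeper answers on 2181"
```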

To see the auto-healing functionality of Monit in real time, you can crash the ZooKeeper service with:

footloose ssh root@monitkfk0 /opt/kafka/bin/zookeeper-server-stop.sh /opt/kafka/config/zookeeper.properties

This command stops the ZooKeeper process, and the Monit web UI should show the process failures highlighted in red and blue. After the configured cycles, Monit takes necessary automatic remediation actions to start both processes, as indicated by the Monit log entries in Figure 6, which should then show up in green in the web UI.

Figure 6: Monit ZooKeeper and Kafka auto-healing log entries.

In general, you can formulate your service checks and the corresponding auto-remediation actions on the basis of a resource's many runtime criteria. You can find the necessary details and syntax in the Monit documentation. After adding proactive healing capabilities to your servers, you can sleep peacefully; no rocket science required.

Last but Not Least

Although you might be enamored of these Monit superpowers, you might also be thinking about centrally controlling a fleet of servers with Monit embedded. M/Monit [5] is the centralized, portal-based, user-friendly solution that gathers the captured data and controls Monit operations. Once Monit is in your systems, you can centralize the whole server watchdog mechanism with M/Monit at a fraction of the cost of the few commercial counterparts that match its capabilities. The M/Monit license is a one-time cost that doesn't expire, and the company behind the open source Monit provides 30-day trial binaries for various *ix flavors and macOS.

The first thing required to use M/Monit is a Docker image to launch and run it as a container machine. I use the Dockerfile_UbuntuJJFMMonit Dockerfile (Listing 12), which makes use of the ubuntujjf image created previously. Also shown is the diff of the previously described helper script (Listing 13), which sets up everything required to run M/Monit in its container machine.

Listing 12


01 FROM ubuntujjf
03 COPY /usr/local/bin/
04 RUN && rm /usr/local/bin/

Listing 13 Diff

> MMONITVER='3.7.14'
> MMONITLDIR='/opt/mmonit'
> MMONITBDIR='/opt/mmonit/bin'
> MMONITCDIR='/opt/mmonit/conf'
> MMONITVFLE='/lib/systemd/system/mmonit.service'
>           mv
<   if ! wget "https://mmonit.com/monit/dist/binary/${MONITVER}/monit-${MONITVER}-linux-x64.tar.gz" -O /tmp/monit.tgz
>   if ! wget -q "https://mmonit.com/monit/dist/binary/${MONITVER}/monit-${MONITVER}-linux-x64.tar.gz" -O /tmp/monit.tgz
< CHECK SYSTEM monitdemo
> CHECK SYSTEM mmonitdemo
> set eventqueue basedir /var/monit/ slots 1000
> set mmonit http://monit:monit@localhost:8080/collector
<   use address
<   allow
< #  allow admin:monit
< #  allow guest:guest readonly
<   allow md5 /etc/.htpasswd admin guest
>   allow monit:monit
<     mkdir "${MONITCDIR}"
>     mkdir "${MONITSDIR}"
<   tee /etc/.htpasswd <<'EOF'
< guest:$apr1$gz4n7s6o$P.O/V1k9rZuV9nN/5lh3l0
< admin:$apr1$esczj7wu$ffu/6j8vETMAMJaVTKn7a1
> cnfgrMMonitCheck() {
>   tee "${MONITSDIR}/mmonit" <<EOF
> check process mmnt matching mmoni
>   start program = "/usr/bin/systemctl start mmonit" with timeout 20 seconds
>   stop program = "/usr/bin/systemctl stop mmonit" with timeout 20 seconds
>   if failed port 8080 for 2 cycles then restart
> }
> instlMMonit() {
>   if ! wget "https://mmonit.com/dist/mmonit-${MMONITVER}-linux-x64.tar.gz" -O /tmp/mmonit.tgz
>   then
>     echo "wget https://mmonit.com/dist/mmonit-${MMONITVER}-linux-x64.tar.gz -O /tmp/mmonit.tgz failed, exiting ..."
>     exit 1
>   fi
>   if ! tar -C /tmp -zxf /tmp/mmonit.tgz
>   then
>     echo "tar -C /tmp -zxf /tmp/mmonit.tgz failed, exiting ..."
>     exit 1
>   else
>     if ! mv "/tmp/mmonit-${MMONITVER}" "${MMONITLDIR}"
>     then
>       echo "mv /tmp/mmonit-${MMONITVER} ${MMONITLDIR} failed, exiting ..."
>       exit 1
>     else
>       rm -rf /tmp/mmonit.tgz
>     fi
>   fi
> }
> setupMMonitSrvc() {
>   tee "${MMONITVFLE}" <<'EOF'
> [Unit]
> Description=System for automatic management and pro-active monitoring of Information Technology Systems.
> Documentation=
> [Service]
> Type=simple
> KillMode=process
> ExecStart=/opt/mmonit/bin/mmonit -i -c /opt/mmonit/conf/server.xml start
> ExecStop=/opt/mmonit/bin/mmonit -i -c /opt/mmonit/conf/server.xml stop
> Restart=on-abnormal
> StandardOutput=journal
> StandardError=journal
> [Install]
> WantedBy=multi-user.target
> EOF
>   if ! systemctl enable mmonit
>   then
>     echo ' systemctl enable mmonit failed, exiting ...'
>     exit 1
>   fi
> }
>   cnfgrMMonitCheck
>   instlMMonit
>   setupMMonitSrvc

You shouldn't be surprised to see that even M/Monit runs under the faithful Monit watchdog. Who says you can't have your cake and eat it too? Setting up a central M/Monit server requires a few more Monit settings. The set eventqueue statement configures an on-disk event queue that stores events safely for M/Monit in the case of temporary problems, such as an unreachable collector. The set mmonit line points to the M/Monit server, with the preconfigured user credentials that allow the Monit service to push events and data.

The same M/Monit user credentials are also added to the allow list. Now create an image for an M/Monit container machine with the command:

docker build -f Dockerfile_UbuntuJJFMMonit . -t ubuntujjfmmnt:3.7.14

Next, clean up your old container machine and bring up a new one with the footloose.yaml file shown in Listing 14 after creating the required container machine network with the command:

docker network create mmonit-demo

Listing 14

MMonit footloose.yaml

01 cluster:
02   name: cluster
03   privateKey: cluster-key
04 machines:
05 - count: 1
06   spec:
07     backend: docker
08     image: ubuntujjfmmnt:3.7.14
09     name: mmonit%d
10     privileged: true
11     networks:
12     - mmonit-demo
13     portMappings:
14     - containerPort: 22
15     - containerPort: 8080

Now access the M/Monit portal through the localhost:<local port shown for 8080> address in your web browser, and you should see an M/Monit login page, where you can log in with the monit/monit credentials to get to a Dashboard page. If you click on the menu items at the top of the page, you'll land on the Status and Analytics pages for the locally running Monit and M/Monit (Figures 7 and 8). If you log in with the admin/swordfish credentials, you'll have access to an Admin menu to configure M/Monit, including Users, Alerts, and so on.

Figure 7: MMonit host page.
Figure 8: MMonit analytics charts.

To go further, add more container machines to run the various service checks and pump more data to M/Monit by creating images that launch data-generating servers running a plain Monit service, Monit with a filesystem check, and Monit with the Kafka service checks. You just need to create new images with the changes shown in Listing 15, delete the running test machine, and bring up a new container machine cluster with the changes shown in Listing 16.

Listing 15

monitrc Diff

diff monitrc
> set eventqueue basedir /var/monit/ slots 1000
> set mmonit http://monit:monit@mmonit0:8080/collector
>   allow monit:monit

Listing 16

footloose.yaml Diff

> - count: 1
>   spec:
>     backend: docker
>     image: ubuntujjfmntmmnt:5.33.0
>     name: mmonitmnt%d
>     privileged: true
>     networks:
>     - mmonit-demo
>     portMappings:
>     - containerPort: 22
> - count: 1
>   spec:
>     backend: docker
>     image: ubuntujjfmntfscmmnt:5.33.0
>     name: mmonitfsc%d
>     privileged: true
>     networks:
>     - mmonit-demo
>     portMappings:
>     - containerPort: 22
> - count: 1
>   spec:
>     backend: docker
>     image: ubuntujjfmntkfkmmnt:3.4.0
>     name: mmonitkfk%d
>     privileged: true
>     networks:
>     - mmonit-demo
>     portMappings:
>     - containerPort: 22
>     - containerPort: 8080

Now when you access the M/Monit portal, you'll see multiple hosts feeding system and service check data (Figures 9 and 10). If you log in with the admin credentials, you will also see action buttons to control your services centrally without accessing each server's Monit web UI. Now you should be at home with M/Monit, and I encourage you to go through its detailed documentation to customize it to your needs.

Figure 9: Multiple Monit-running hosts.
Figure 10: Host services action buttons.
