Photo by Volodymyr Hryshchenko on unsplash

Run your own chat server

Choosing the Red Pill

Article from ADMIN 75/2023
The Matrix open standard implements secure, decentralized communication and comes with bridges to popular chat apps.

Millions of users communicate with central chat services such as WhatsApp or Telegram, entrusting their messages to a centralized platform and its operators. If this meets your needs, you can take the blue pill and pass on to the next article. But if you are interested in Matrix as an alternative, which enables chats without an external provider but still offers the established services, take the red pill and proceed.

The choice between the red and blue pills is not quite as difficult for chat systems as it was for saving humanity in the 1999 movie The Matrix. However, users are often faced with the decision as to whether they want to trust their chat and group communications to large corporations like Meta (Facebook, WhatsApp) and Alphabet (Google) or a corporate construct like Telegram, or whether they would rather not. If you appreciate the convenience of a modern chat application with groups but prefer to keep your data under your own control, Matrix [1] is a serious alternative. However, setting it up requires some work and, of course, a dedicated server.

Open Protocol

Matrix itself is not software. It is an open source, end-to-end encrypted protocol for chat and real-time communication. As an open standard, it ensures that different software implementations like Element (client) and Synapse (server) remain compatible with each other. The principle of Matrix is simple: You run your own Matrix server for an Internet domain (e.g., domain.com) and create your users with names following the pattern @name:domain.com.

Up to this point, everything looks very simple and you seem to have yet another closed chat service. However, Matrix lets you publish the server on the Internet (federation) so that users of different Matrix servers can connect to each other. In simple terms, Matrix follows the example of the Simple Mail Transfer Protocol (SMTP), which delivers local messages within its own domain and forwards external messages to the respective SMTP servers in other domains. As with mail services, self-hosted solutions with their own servers coexist with cloud offerings like matrix.org. However, Matrix users are spared the spam that mail users have to live with every day because, unlike SMTP, the Matrix protocol does not deliver messages unsolicited. If a user of a third-party domain wants to make chat contact, the recipient must agree.

Of course, if you set up your own Matrix server, you then face the problem that many of your existing contacts will still remain on WhatsApp, Facebook Messenger, or Telegram. Alternatively, you could work with closed systems like Discord or Slack. Matrix offers a whole series of bridges for this purpose that connect to the third-party services for the respective user and forward messages to Matrix. With the correct configuration, you then only need to use a single Matrix chat client through which you can communicate with other Matrix users, as well as with all other chat platforms, through the one Matrix server.

In this article, I show you how to set up your own Matrix server with the open source Synapse software, use matching clients, and set up a bridge for WhatsApp. Of course, Synapse can be run on a virtual machine (VM) with Debian or Fedora Linux, but I deploy the server in a Podman container (you can also use Docker), which then works largely independently of the server operating system.

Preparations

Before you can set up the Synapse server for Matrix, which is written in Python, you need to take care of a few preparations. The Matrix service in the container uses the HTTP protocol on port 8008 or HTTPS on port 8448. You could now release port 8448, route the Matrix traffic there, and give the Synapse server a valid Let's Encrypt certificate, for example. However, that is more work than necessary.

In this example, the Synapse service runs on a rented server (Hetzner) with a single IP address. The server is also used by a number of other services. The incoming traffic is therefore distributed by an NGINX reverse proxy to the various service containers according to the name of the services. Packets for service1.domain.com go to a different container than requests to service5.domain.com. Additionally, the NGINX server handles Secure Sockets Layer (SSL) termination for all services and manages the domain's Let's Encrypt certificate. HTTP is then all you need between the proxy and the services.

For this Synapse example, I used the DNS name matrix.domain.com and the regular HTTPS port 443. Depending on how you run your Docker or Podman setup, NGINX forwards traffic to an internal bridge IP address or mapped port. In this case, it is a heterogeneous infrastructure that is gradually migrating services from traditional VMs to containers, which is why these containers run on the same bridged network as the VMs and why each container has its own internal IP address. Another reason is security. Access to services on an internal network can be controlled and monitored more easily through the firewall than if your containers are bound directly to the externally accessible interface by port mapping.

The reverse proxy configuration for the Synapse server (/etc/nginx/conf.d/matrix.conf) looks something like Listing 1. All HTTPS traffic for matrix.domain.com:443 now reaches port 8008 of the container over the internal bridge IP 192.168.122.26. If you are running containers without a local bridge, the entry is:

Listing 1

Synapse Reverse Proxy

server {
   listen 443 ssl http2;
   server_name matrix.domain.com;
   ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem; # managed by Certbot
   ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem; # managed by Certbot
   [...] (SSL and LOG parameters)
   location ~ ^(/_matrix|/_synapse/client) {
     client_max_body_size 100M;
     proxy_pass http://192.168.122.26:8008;
     proxy_set_header X-Forwarded-For $remote_addr;
     proxy_set_header X-Forwarded-Proto $scheme;
     proxy_set_header Host $host;
   }
}
proxy_pass http://127.0.0.1:8008;

To the outside world, your Synapse server goes by the name of matrix.domain.com , but the service should use names like @user1:domain.com , not @user1:matrix.domain.com . By extension, you need to run your Synapse server with the domain.com configuration and at the same time tell third-party servers that chat messages for domain.com should be routed to the server on https://matrix.domain.com:443.

For discovery purposes, the Matrix protocol uses two options. A server that wants to contact another domain first tries to locate the target server with a DNS query. Just as SMTP publishes the mail server of a domain with an MX DNS record, Matrix asks for an SRV entry named _matrix._tcp (i.e., in this example, for _matrix._tcp.domain.com). The response must then be matrix.domain.com and port 443.
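If you control the domain's DNS, the SRV record for this example would look like the following zone file entry (a sketch; the priority of 10 and weight of 0 are arbitrary choices, and the TTL is up to you):

```
_matrix._tcp.domain.com. 3600 IN SRV 10 0 443 matrix.domain.com.
```

The four values after SRV are priority, weight, port, and target host.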

However, not all domain owners can easily create or modify DNS records. If the DNS query for the SRV record fails, the Matrix protocol takes a different tack. It runs a REST API request over HTTPS to the domain name with the URL /.well-known/matrix/server (in this case to https://domain.com/.well-known/matrix/server). Matrix expects an HTTP 200 response in JSON format. In this case, this response is also passed by the NGINX reverse proxy. The configuration is usually in the basic configuration of the NGINX server at /etc/nginx/nginx.conf (Listing 2).

Listing 2

NGINX Reverse Proxy

server {
   listen 443 ssl http2;
   server_name domain.com;
   root /usr/share/nginx/html/blank;
   [...] (SSL configuration)
   location /.well-known/matrix/server
   {
     default_type application/json;
     return 200 '{"m.server": "matrix.domain.com:443"}';
   }
}
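With the records and the reverse proxy in place, you can verify both discovery mechanisms from any machine. This is only a sketch; it assumes dig and curl are installed and that your DNS changes have propagated:

```shell
# SRV-based discovery
dig +short SRV _matrix._tcp.domain.com

# .well-known fallback
curl https://domain.com/.well-known/matrix/server
```

The first command should return the SRV record pointing to matrix.domain.com on port 443; the second should return the JSON document configured in Listing 2.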

Configuring Synapse

To configure the Synapse server, you first need a configuration file. The tool kindly creates these when first launched. First create a directory on the container host where Synapse will store its data (i.e., /var/pods/synapse in this case), and then start the container with:

podman run -it --rm --name synapse \
  --volume /var/pods/synapse:/data:Z \
  -e SYNAPSE_SERVER_NAME=domain.com \
  -e SYNAPSE_REPORT_STATS=yes \
  docker.io/matrixdotorg/synapse:latest generate

If you are using Docker instead of Podman, you need to run the docker run command as root. The :Z at the end of the volume specification is only required on systems with SELinux active. The SYNAPSE_REPORT_STATS switch lets Synapse send anonymous operating statistics to the developers; if you do not want this, specify no here.

After the command executes, you end up with files homeserver.yaml and log.config and a signing key in the /var/pods/synapse directory. In the YAML file you will find the basic configuration for the service, the database, and a registration_shared_secret. Keep this shared secret in a safe place – you will need it to register new users – but remove it from homeserver.yaml after the initial setup.
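Once the container is up and running (as described below) and while the shared secret is still in homeserver.yaml, you can create the first account with the register_new_matrix_user tool that ships in the Synapse image. The user name and password here are placeholders:

```shell
podman exec -it synapse register_new_matrix_user \
  -c /data/homeserver.yaml \
  -u user1 -p 'use-a-strong-password' --admin \
  http://localhost:8008
```

The --admin switch makes the account a server administrator; leave it out for regular users.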

As a database, Synapse uses SQLite in the basic configuration. Although this setup is fine for small test installations, if you set up a Synapse server for dozens of users and chat rooms, you will definitely want to use a PostgreSQL server instead. The Synapse documentation describes in detail how to set up this database, but it does not necessarily have to happen immediately during the first trial run. The documentation also describes how to transfer an existing SQLite database to PostgreSQL retroactively and change the Synapse server to match. In this test setup, I decided to stick with SQLite.
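If you do migrate to PostgreSQL later, the database section in homeserver.yaml looks roughly like the following sketch; the host, credentials, and database name are placeholders for your own setup:

```
database:
  name: psycopg2
  args:
    user: synapse
    password: secret
    database: synapse
    host: 192.168.122.30
    cp_min: 5
    cp_max: 10
```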

With the appropriate configuration, you can now launch your Synapse container. Depending on the network configuration, you run it on the host IP with the parameter -p 8008:8008 to make the HTTP port available on 127.0.0.1:8008. This setup uses Synapse on the bridge with

--net virt_net --ip 192.168.122.26 --mac-address 52:54:C0:A8:7A:1A

(i.e., with its own IP address). Then, create a service definition for the Synapse server so that systemd can run the pod at system startup (see the "Synapse.service for Systemd" box).

Synapse.service for Systemd

The service definition is located in /etc/systemd/system/synapse.service. If you enable the service with

systemctl enable synapse.service

the host starts the container automatically at boot time. This service file uses the bridge network virt_net. If you work without a bridge, leave out the --net, --ip, and --mac-address lines and add -p 8008:8008 instead:

[Unit]
   Description=synapse
   After=network-online.target
   Wants=network-online.target
[Service]
   ExecStartPre=mkdir -p /var/pods/synapse
   ExecStartPre=-/bin/podman kill synapse
   ExecStartPre=-/bin/podman rm synapse
   ExecStartPre=-/bin/podman pull docker.io/matrixdotorg/synapse:latest
   ExecStart=/bin/podman run
     --name synapse
     --volume /var/pods/synapse:/data:Z
     -e SYNAPSE_SERVER_NAME=domain.com
     -e SYNAPSE_REPORT_STATS=yes
     --net virt_net
     --ip 192.168.122.26
     --mac-address 52:54:C0:A8:7A:1a docker.io/matrixdotorg/synapse:latest
   ExecStop=/bin/podman stop synapse
[Install]
   WantedBy=multi-user.target
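After enabling and starting the service, a quick smoke test is to query the client API directly. This sketch assumes the bridged setup with the container IP from above; with port mapping, use 127.0.0.1:8008 instead:

```shell
systemctl start synapse.service
curl http://192.168.122.26:8008/_matrix/client/versions
```

If Synapse is up, the curl command returns a JSON list of the Matrix client API versions the server supports.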
