I recently found myself setting up a Mastodon server. Elon Musk threatening to buy Twitter reminded me how fragile it is to live in walled gardens controlled by others. I know something of what it takes to run large-scale applications in the cloud. It’s my job. (Senior Director of Engineering, Platform, at Pendo.) I have some clue what it costs, and I’m not upset at being presented advertisements on Twitter. I even like being recommended content that I didn’t know I wanted to find!

But I’m not satisfied with what The Algorithm shows me, and I don’t like how my timeline (yes, even after choosing chronological) mutates halfway through my reading a tweet that I wanted to finish but which I’ll never find again. It’s like reading a book when I am dreaming; as soon as I realize I’m reading a book, it fades away and I can’t read it any more. Dreams and Twitter both frustrate me this way.

Many folks don’t remember what it was like when a lot of email was not compatible across systems. Most really don’t understand what a gift it is that you can just ask someone for their “email address” and don’t have to ask them which email service you have to sign up with to talk to them. (The fact that people sometimes ask for your “gmail address” as if there were no other email provider demonstrates that keeping interoperating systems is not the low-energy state.)

I’m not boycotting Twitter. I’m still there. But… remember how, as interoperable email started to win, the walled-garden email providers eventually had to bridge with it to stay relevant to their users? The more of us move content into the Fediverse, the more pressure there will be for Twitter — and similar networks — to bridge. Right now it’s still niche. But the only way for it to become more mainstream is for more of us to adopt it. (If you don’t remember, just take my word for it.)

And I like having more control over what I see, more options for how to interact. And I really appreciate open standards, like ActivityPub and ActivityStreams.

I am making two contributions to add straw to the camel’s back here: Running Maker Forums Social Mastodon and writing this document about how I deployed it on a modern OS in a way that I believe will make it easy to maintain on an ongoing basis.

Credit

This was heavily based on sleeplessbeastie’s How to take advantage of Docker to install Mastodon, but adjusted to work on CentOS 9 derivatives like RHEL, AlmaLinux, and Rocky Linux, and made a bit more opinionated, such as by recommending a particular backup solution.

Installation

I started from AlmaLinux 9, and the configuration I used during installation was:

  • Set hostname to the fully-qualified domain name
  • Select “Custom Storage Configuration”, then “Create default configuration”
  • Decreased swap to 2GB — if you are swapping much, this system won’t work well anyway
  • At least 10GB root (my 15GB appears to be excessive)
  • At least 15GB /var/lib/docker or /var/lib/containers (for podman) (my 25GB appears to be excessive)
  • The rest of the space for /opt/mastodon
  • Software Selection
    • Server (instead of the default Server with GUI) on the left, then on the right add:
      • Guest Agents
      • Debugging Tools
      • Mail Server

Preparation

If you need to temporarily allow password login by SSH during installation, the first task after installation is to set up SSH keys and then disable password login. After confirming that key-based login works, modify /etc/ssh/sshd_config to say PasswordAuthentication no and then systemctl restart sshd.
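
For example, a minimal way to make that change (a sketch assuming the stock sshd_config, where the option may be present but commented out):

# sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' /etc/ssh/sshd_config
# systemctl restart sshd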

Some of the containers need to reach out to the internet to do their jobs, and the server needs to respond to http and https requests.

# firewall-cmd --add-service http --add-service https --zone public
# firewall-cmd --zone=public --add-masquerade
# firewall-cmd --runtime-to-permanent
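
You can double-check the resulting firewall configuration:

# firewall-cmd --zone=public --list-all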

Elasticsearch is needed for search, and it requires this kernel setting to be increased:

# echo "vm.max_map_count=262144" > /etc/sysctl.d/90-max_map_count.conf
# sysctl --load /etc/sysctl.d/90-max_map_count.conf
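
Confirm the setting took effect:

# sysctl vm.max_map_count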

Use memory more efficiently:

# echo 'vm.overcommit_memory=1' > /etc/sysctl.d/90-vm_overcommit_memory.conf
# sysctl --load /etc/sysctl.d/90-vm_overcommit_memory.conf

SELinux needs one change for this to work.

# setsebool -P httpd_can_network_connect 1
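
You can verify the boolean:

# getsebool httpd_can_network_connect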

If you do not plan to use cockpit, you can further reduce the footprint. I chose:

# dnf erase $(rpm -qa | grep cockpit) --exclude=jq,parted,gdisk

Install nginx on the host

# dnf install nginx

Installing docker

Now you have a choice:

  • Use docker? Read this post.
  • Use podman? Read that post. This is currently failing for me, so I am not recommending it at this time.

I used the CentOS Docker package instructions, converting from yum to dnf:

# dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# dnf install --allowerasing docker-ce docker-ce-cli containerd.io docker-compose-plugin

(I needed to include --allowerasing because of a conflict with podman and buildah, which are already installed; they need to be erased to install docker.)

# systemctl enable --now docker
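
Before moving on, a quick smoke test that the Docker daemon and outbound networking both work:

# docker run --rm hello-world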

Application

Now you’re finally ready to start installing Mastodon itself. Make room for Mastodon to live in:

# mkdir -p /opt/mastodon/database/{postgresql,pgbackups,redis,elasticsearch}
# mkdir -p /opt/mastodon/web/{public,system}
# chown 991:991 /opt/mastodon/web/{public,system}
# chown 1000 /opt/mastodon/database/elasticsearch
# chown 70:70 /opt/mastodon/database/pgbackups
# cd /opt/mastodon
# touch application.env database.env
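
These files will hold secrets, so it’s worth restricting them to root before filling them in:

# chmod 600 /opt/mastodon/*.env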

Create /opt/mastodon/docker-compose.yml

version: '3'

services:
  postgresql:
    image: postgres:14-alpine
    env_file: database.env
    restart: always
    shm_size: 512mb
    healthcheck:
      test: ['CMD', 'pg_isready', '-U', 'postgres']
    volumes:
      - postgresql:/var/lib/postgresql/data
      - pgbackups:/backups
    networks:
      - internal_network

#  pgbouncer:
#    image: edoburu/pgbouncer:1.12.0
#    env_file: database.env
#    depends_on:
#      - postgresql
#    healthcheck:
#      test: ['CMD', 'pg_isready', '-h', 'localhost']
#    networks:
#      - internal_network

  redis:
    image: redis:7-alpine
    restart: always
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
    volumes:
      - redis:/data
    networks:
      - internal_network

  redis-volatile:
    image: redis:7-alpine
    restart: always
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
    networks:
      - internal_network

  elasticsearch:
    image: elasticsearch:7.17.4
    restart: always
    env_file: database.env
    environment:
      - cluster.name=elasticsearch-mastodon
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - ingest.geoip.downloader.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test: ["CMD-SHELL", "nc -z elasticsearch 9200"]
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data
    networks:
      - internal_network

  website:
    image: tootsuite/mastodon:v3.5.3
    env_file: 
      - application.env
      - database.env
    command: bash -c "bundle exec rails s -p 3000"
    restart: always    
    depends_on:
      - postgresql
#      - pgbouncer
      - redis
      - redis-volatile
      - elasticsearch
    ports:
      - '127.0.0.1:3000:3000'
    networks:
      - internal_network
      - external_network
    healthcheck:
      test: ['CMD-SHELL', 'wget -q --spider --proxy=off localhost:3000/health || exit 1']
    volumes:
      - uploads:/mastodon/public/system

  shell:
    image: tootsuite/mastodon:v3.5.3
    env_file: 
      - application.env
      - database.env
    command: /bin/bash 
    restart: "no"
    networks:
      - internal_network
      - external_network
    volumes:
      - uploads:/mastodon/public/system
      - static:/static

  streaming:
    image: tootsuite/mastodon:v3.5.3
    env_file: 
      - application.env
      - database.env
    command: node ./streaming
    restart: always
    depends_on:
      - postgresql
#      - pgbouncer
      - redis
      - redis-volatile
      - elasticsearch
    ports:
      - '127.0.0.1:4000:4000'
    networks:
      - internal_network
      - external_network
    healthcheck:
      test: ['CMD-SHELL', 'wget -q --spider --proxy=off localhost:4000/api/v1/streaming/health || exit 1']

  sidekiq:
    image: tootsuite/mastodon:v3.5.3
    env_file: 
      - application.env
      - database.env
    command: bundle exec sidekiq
    restart: always
    depends_on:
      - postgresql
#      - pgbouncer
      - redis
      - redis-volatile
      - website
    networks:
      - internal_network
      - external_network
    healthcheck:
      test: ['CMD-SHELL', "ps aux | grep '[s]idekiq\ 6' || false"]
    volumes:
      - uploads:/mastodon/public/system

networks:
  external_network:
  internal_network:
  #  internal: true

volumes:
  postgresql:
    driver_opts:
      type: none
      device: /opt/mastodon/database/postgresql
      o: bind    
  pgbackups:
    driver_opts:
      type: none
      device: /opt/mastodon/database/pgbackups
      o: bind    
  redis:
    driver_opts:
      type: none
      device: /opt/mastodon/database/redis
      o: bind    
  elasticsearch:
    driver_opts:
      type: none
      device: /opt/mastodon/database/elasticsearch
      o: bind    
  uploads:
    driver_opts:
      type: none
      device: /opt/mastodon/web/system
      o: bind    
  static:
    driver_opts:
      type: none
      device: /opt/mastodon/web/public
      o: bind    

(Note: I have not tested the pgbouncer configuration; that is copied from sleeplessbeastie’s guide, and I have left it commented out in case I later need to add pgbouncer to my configuration.)

For redis and postgresql, you can choose whether to track major version tags and auto-upgrade minor versions with docker compose pull, or to pin minor versions and change them manually. You will want to lock Mastodon to a specific version, follow its releases (for example, via the Atom feed on GitHub), and observe the update procedures called out in the release notes when upgrading Mastodon. There is no major version tag for Elasticsearch on Docker Hub, so regularly check the Elasticsearch page on Docker Hub, especially for security updates that address Java security flaws as they are discovered.
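
Later, picking up new minor versions for the images you track by major version is just a pull and re-up (a sketch; run from /opt/mastodon, and only once the installation below is complete):

# cd /opt/mastodon
# docker compose pull
# docker compose up -d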

Note: you cannot set the internal_network as an internal network and use firewalld. The default docker-compose.yml that comes with Mastodon sets:

networks:
  external_network:
  internal_network:
    internal: true

That internal: true doesn’t work with firewalld, which is why it is commented out in the docker-compose.yml here. If this is ever fixed, you may be able to re-add that additional restriction.

Choose or build image

All of the image: specifications in the docker-compose.yml file that start with tootsuite/ are the official builds, which you can use as-is.

However, it’s handy to have the source around, so I recommend you clone it.

# cd /opt/mastodon
# git clone https://github.com/mastodon/mastodon.git

If you are installing the Glitch-soc version of Mastodon, instead clone this repository:

# git clone https://github.com/glitch-soc/mastodon.git

Whichever version you install, this gives you an easy reference to all the files there, including the .env.production.sample file that you will want to reference when setting up your environment files. There are additional settings, not included in the example application.env file below, that you will want to consider for your deployment.

If you want to build an image from source, do this using a meaningful tag for the version you are actually using:

# cd /opt/mastodon/mastodon
# docker build -f Dockerfile --tag mastodon:v4.0.0 .

This will take a while; depending on your hardware, it could easily be 15 minutes. Having done that, modify the image: tootsuite/ lines in docker-compose.yml to instead reference image: mastodon:v4.0.0 (or whatever tag you provided when you built it).
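
If you take that route, here’s one way to make the edit in bulk (a sketch; adjust both tags to match the version you actually built):

# sed -i 's|image: tootsuite/mastodon:v3.5.3|image: mastodon:v4.0.0|' /opt/mastodon/docker-compose.yml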

Pull images you did not build

Now download all the images required to compose your system.

# docker compose pull

Secrets and configuration

It’s time to fill in application.env and database.env. You will want to start with what’s in the current .env.production.sample file in the Mastodon source for the version you are running. But for clarity, here’s a template application.env file:

# environment
RAILS_ENV=production
NODE_ENV=production

# domain
LOCAL_DOMAIN=your.server.fqdn

# redirect to the first profile
SINGLE_USER_MODE=false

# do not serve static files
RAILS_SERVE_STATIC_FILES=false

# concurrency
WEB_CONCURRENCY=2
MAX_THREADS=5

# pgbouncer
#PREPARED_STATEMENTS=false

# locale
DEFAULT_LOCALE=en

# email, not used
SMTP_SERVER=mailserver.invalid
SMTP_PORT=587
SMTP_LOGIN=mastodon
SMTP_PASSWORD=ifYouNeedId
SMTP_FROM_ADDRESS=notifications-noreply@your.server.fqdn


# secrets
SECRET_KEY_BASE=add
OTP_SECRET=add

# Changing VAPID keys will break push notifications
VAPID_PRIVATE_KEY=add
VAPID_PUBLIC_KEY=add

To generate values for SECRET_KEY_BASE and OTP_SECRET run this twice to generate two different keys:

# docker compose run --rm shell bundle exec rake secret

To generate values for VAPID_PRIVATE_KEY and VAPID_PUBLIC_KEY:

# docker compose run --rm shell bundle exec rake mastodon:webpush:generate_vapid_key

Here’s a template database.env file:

# postgresql configuration
POSTGRES_USER=mastodon
POSTGRES_DB=mastodon
POSTGRES_PASSWORD=generate1
PGPASSWORD=generate1
PGPORT=5432
PGHOST=postgresql
PGUSER=mastodon

# pgbouncer configuration
#POOL_MODE=transaction
#ADMIN_USERS=postgres,mastodon
#DATABASE_URL="postgres://mastodon:generate1@postgresql:5432/mastodon"

# elasticsearch
ES_JAVA_OPTS=-Xms512m -Xmx512m
ELASTIC_PASSWORD=generate2

# mastodon database configuration
#DB_HOST=pgbouncer
DB_HOST=postgresql
DB_USER=mastodon
DB_NAME=mastodon
DB_PASS=generate1
DB_PORT=5432

REDIS_HOST=redis
REDIS_PORT=6379

CACHE_REDIS_HOST=redis-volatile
CACHE_REDIS_PORT=6379

ES_ENABLED=true
ES_HOST=elasticsearch
ES_PORT=9200
ES_USER=elastic
ES_PASS=generate2

To generate keys for the generate1 and generate2 values, use this twice:

# openssl rand -base64 15

Bring up

With the environment files filled with secrets and keys, initialize those services. First get the static files ready to be served directly by nginx on the host:

# docker compose run --rm shell bash -c "cp -r /opt/mastodon/public/* /static/"

Then bring up the data layer.

# docker compose up -d postgresql redis redis-volatile
# watch docker compose ps

Wait for running (healthy), then Control-C and initialize the database.

# docker compose run --rm shell bundle exec rake db:setup

Note that later, after each Mastodon update, you will need to run all database migrations and update your copies of the static files:

# docker compose run --rm shell bundle exec rake db:migrate
# docker compose run --rm shell bash -c "cp -r /opt/mastodon/public/* /static/"

Handle HTTPS

Install certbot and the nginx plugin for certbot.

# dnf config-manager --set-enabled crb
# dnf install epel-release
# dnf install certbot python3-certbot-nginx

Get a certificate. You’ll have to answer a bunch of questions. Make sure you include the FQDN of the server!

# certbot --nginx

The certbot nginx plugin may have started an nginx process outside systemd’s control; make sure it is stopped before continuing:

# killall nginx

Now copy dist/nginx.conf from the Mastodon source tree (/opt/mastodon/mastodon/dist/nginx.conf) to /etc/nginx/conf.d/mastodon.conf, then

  • change example.com everywhere to the fully-qualified domain name for your server (a one-line sketch follows this list)
  • uncomment the ssl_certificate and ssl_certificate_key lines
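
For the domain substitution, here’s a one-liner (a sketch; substitute your server’s actual FQDN for your.server.fqdn):

# sed -i 's/example.com/your.server.fqdn/g' /etc/nginx/conf.d/mastodon.conf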

Comment out the default server block from /etc/nginx/nginx.conf because it conflicts with the https redirect in mastodon.conf:

#    server {
#        listen       80;
#        listen       [::]:80;
#        server_name  _;
#        root         /usr/share/nginx/html;
#
#        # Load configuration files for the default server block.
#        include /etc/nginx/default.d/*.conf;
#
#        error_page 404 /404.html;
#        location = /404.html {
#        }
#
#        error_page 500 502 503 504 /50x.html;
#        location = /50x.html {
#        }
#    }

Now enable and start nginx and the certbot renewal:

# systemctl enable --now nginx.service
# systemctl enable --now certbot-renew.timer
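
You can confirm that renewal will succeed when the timer fires:

# certbot renew --dry-run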

Tootctl

The tootctl CLI tool is essential for administering Mastodon. Make it accessible via the Mastodon shell docker image with a simple shell script.

Contents of /usr/local/bin/tootctl

#!/bin/bash
docker compose -f /opt/mastodon/docker-compose.yml run --rm shell tootctl "$@"

Then:

# chmod +x /usr/local/bin/tootctl
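
A quick check that the wrapper works (it spins up a temporary shell container, so it takes a few seconds):

# tootctl version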

Start mastodon

More systemd unit files!

Contents of /etc/systemd/system/mastodon.service

[Unit]
Description=Mastodon service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
StandardError=file:/var/log/mastodon.err
StandardOutput=file:/var/log/mastodon.out

WorkingDirectory=/opt/mastodon
ExecStart=/usr/bin/docker compose -f /opt/mastodon/docker-compose.yml up -d
ExecStop=/usr/bin/docker compose -f /opt/mastodon/docker-compose.yml down

[Install]
WantedBy=multi-user.target

Then run

# systemctl daemon-reload
# systemctl enable --now mastodon.service
# watch docker compose -f /opt/mastodon/docker-compose.yml ps

Wait for running (healthy) before the next step. This will probably take 30 seconds to a minute.

Create admin user

# tootctl accounts create $admin-user --email admin-user@mail.invalid --confirmed --role Admin

This prints a new secure password for the account; save it. Then disable registrations while you finish setting up:

# tootctl settings registrations close

If you later want to open registrations:

# tootctl settings registrations open

Cleanup

Contents of /etc/systemd/system/mastodon-media-remove.service

[Unit]
Description=Mastodon - media remove service
Wants=mastodon-media-remove.timer

[Service]
Type=oneshot
StandardError=file:/var/log/mastodon-media-remove.err
StandardOutput=file:/var/log/mastodon-media-remove.out

WorkingDirectory=/opt/mastodon
ExecStart=/usr/bin/docker compose -f /opt/mastodon/docker-compose.yml run --rm shell tootctl media remove

[Install]
WantedBy=multi-user.target

Contents of /etc/systemd/system/mastodon-media-remove.timer

[Unit]
Description=Schedule a media remove every week

[Timer]
Persistent=true
OnCalendar=Sat *-*-* 00:00:00
Unit=mastodon-media-remove.service

[Install]
WantedBy=timers.target

Contents of /etc/systemd/system/mastodon-preview_cards-remove.service

[Unit]
Description=Mastodon - preview cards remove service
Wants=mastodon-preview_cards-remove.timer

[Service]
Type=oneshot
StandardError=file:/var/log/mastodon-preview-remove.err
StandardOutput=file:/var/log/mastodon-preview-remove.out

WorkingDirectory=/opt/mastodon
ExecStart=/usr/bin/docker compose -f /opt/mastodon/docker-compose.yml run --rm shell tootctl preview_cards remove

[Install]
WantedBy=multi-user.target

Contents of /etc/systemd/system/mastodon-preview_cards-remove.timer

[Unit]
Description=Schedule a preview cards remove every week

[Timer]
Persistent=true
OnCalendar=Sat *-*-* 00:00:00
Unit=mastodon-preview_cards-remove.service

[Install]
WantedBy=timers.target

Now start them up

# systemctl daemon-reload
# systemctl enable --now mastodon-preview_cards-remove.timer
# systemctl enable --now mastodon-media-remove.timer

Backups

I chose to use restic. Use what you want, but here’s an easy-to-apply pattern using restic.

Contents of /opt/mastodon/backup-files

/etc/nginx
/etc/letsencrypt
/etc/systemd/system
/etc/fail2ban
/root
/opt/mastodon/database/pgbackups
/opt/mastodon/*.env
/opt/mastodon/docker-compose.yml
/opt/mastodon/database/redis
/opt/mastodon/web/system
/opt/mastodon/backup-files
/opt/mastodon/mastodon-backup
/var/lib/rpm
/usr/local/bin

Now run:

# dnf install restic

Contents of /etc/systemd/system/mastodon-backup.timer

[Unit]
Description=Schedule a mastodon backup every hour

[Timer]
Persistent=true
OnCalendar=*:00:00
Unit=mastodon-backup.service

[Install]
WantedBy=timers.target

Contents of /etc/systemd/system/mastodon-backup.service

[Unit]
Description=Mastodon - backup service
# Without this, they can run at the same time and race to docker compose,
# double-creating networks and failing due to ambiguous network definition
# requiring `docker network prune` and restarting
After=mastodon.service

[Service]
Type=oneshot
StandardError=file:/var/log/mastodon-backup.err
StandardOutput=file:/var/log/mastodon-backup.log

WorkingDirectory=/opt/mastodon
ExecStart=/bin/bash /opt/mastodon/mastodon-backup

[Install]
WantedBy=multi-user.target

The contents of /root/backup-configuration will need some pieces filled in. Do keep in mind that you also need to store the restic password somewhere safe; if it exists only on this system and nowhere else, your backup is no better than a pile of random data. It’s important to start this service after mastodon.service: otherwise mastodon and mastodon-backup race to create the docker networks, both succeed, and then docker compose is very confused by the duplicate networks. If this ever happens to you, docker network prune will remove the duplicate networks and you can start over.

AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
SERVER=
PORT=
BUCKET=
RESTIC_PASSWORD_FILE=/root/restic-password
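
One way to create that password file (a sketch; however you create it, store a copy of the password somewhere other than this server):

# openssl rand -base64 32 > /root/restic-password
# chmod 600 /root/restic-password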

/opt/mastodon/backup-init

#!/bin/bash
set -e
. /root/backup-configuration
restic -r s3:https://$SERVER:$PORT/$BUCKET init

/opt/mastodon/mastodon-backup

#!/bin/bash
set -e
. /root/backup-configuration

docker compose -f /opt/mastodon/docker-compose.yml run --rm postgresql sh -c "pg_dump -Fp mastodon | gzip > /backups/dump.sql.gz"
restic -r s3:https://$SERVER:$PORT/$BUCKET --cache-dir=/root backup $(cat /opt/mastodon/backup-files) --exclude /opt/mastodon/database/postgresql
restic -r s3:https://$SERVER:$PORT/$BUCKET --cache-dir=/root forget --prune --keep-hourly 24 --keep-daily 7 --keep-monthly 3

Now start them up

# chmod +x /opt/mastodon/mastodon-backup /opt/mastodon/backup-init
# /opt/mastodon/backup-init
# systemctl daemon-reload
# systemctl enable --now mastodon-backup.service
# systemctl enable --now mastodon-backup.timer

Confirm that hourly backups are happening and accessible using

# restic -r s3:https://$SERVER:$PORT/$BUCKET snapshots
# restic -r s3:https://$SERVER:$PORT/$BUCKET mount /mnt
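
To spot-check a restore without touching the live system, restore a snapshot into a scratch directory (a sketch):

# restic -r s3:https://$SERVER:$PORT/$BUCKET restore latest --target /tmp/restore-test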

Search will (as of this writing) fail to deploy until at least one local toot has been made. After the first toot on your server, initialize search:

# tootctl search deploy

Questions?

I’ll try to improve this post to address your questions if I know the answers. Feel free to reach out by comments on Mastodon.