Deploying Mastodon on CentOS 9 and derivatives with podman
Warning: As of May 2023, I have given up on using podman due to multiple SELinux permission failures that I never resolved; I have moved to running Mastodon only on docker. This post now documents an approach that did not work for me.
After setting up Maker Forums Social running Mastodon on docker, I volunteered to deploy Aviating.com Social as well. This time, instead of using docker, I built on podman.
Warning: As of 5 February 2023, updates are now attempting to replace runc with Docker's containerd.io package. Using docker-compose with podman looks like a bad idea at this point. This document now serves as a record of how I deployed a system, but I will shortly be working on a new deployment strategy, based either on podman generate systemd to generate individual systemd unit files and add dependencies, or podman generate/play kube to create a Kubernetes YAML file and then run it.
System installation and preparation follow my earlier guidelines, including firewall, sysctl, and SELinux configuration. This post builds on that with similar instructions for using podman directly, making it easy to benefit from SELinux as well as the rest of the podman architecture.
Podman
I first tried to replace docker-compose with podman-compose, but gave up after repeated failures. I then found that docker-compose can now be used with podman directly. At the time of writing, docker-compose is not available in EPEL, so we instead install the docker-compose plugin and create a link. Then we enable the socket that docker-compose uses to manage podman. Finally, we install git to have the Mastodon sources handy.
# dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# dnf install docker-compose-plugin
# ln -s /usr/libexec/docker/cli-plugins/docker-compose /usr/local/bin/docker-compose
# systemctl enable --now podman.socket
# dnf install git
Application
Now you’re finally ready to start installing Mastodon itself. Make room for Mastodon to live in:
# mkdir -p /opt/mastodon/database/{postgresql,pgbackups,redis,elasticsearch}
# mkdir -p /opt/mastodon/web/{public,system,static}
# chown 991:991 /opt/mastodon/web/{public,system,static}
# chown 1000 /opt/mastodon/database/elasticsearch
# chown 70:70 /opt/mastodon/database/pgbackups
# cd /opt/mastodon
# touch application.env database.env
# semanage fcontext -a -t httpd_sys_content_t /opt/mastodon/web/public
# restorecon -R -v /opt/mastodon/web/public
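The numeric IDs in the chown commands above match the users inside the containers (991 for the mastodon web directories, 70 for postgres, 1000 for elasticsearch). As a sketch, a hypothetical helper to sanity-check ownership before first boot:

```shell
# Hypothetical helper: confirm a path is owned by the numeric UID the
# container expects. Uses GNU stat (-c %u prints the owner UID).
check_owner() {
  [ "$(stat -c %u "$1")" = "$2" ]
}
# e.g.: check_owner /opt/mastodon/database/pgbackups 70 || echo "wrong owner"
```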
Create /opt/mastodon/docker-compose.yml
version: '3'
services:

  postgresql:
    image: postgres:14-alpine
    env_file: database.env
    restart: always
    shm_size: 512mb
    healthcheck:
      test: ['CMD', 'pg_isready', '-U', 'postgres']
    volumes:
      - /opt/mastodon/database/postgresql:/var/lib/postgresql/data:z
      - /opt/mastodon/database/pgbackups:/backups:z
    networks:
      - internal_network

#  pgbouncer:
#    image: edoburu/pgbouncer:1.12.0
#    env_file: database.env
#    depends_on:
#      - postgresql
#    healthcheck:
#      test: ['CMD', 'pg_isready', '-h', 'localhost']
#    networks:
#      - internal_network

  redis:
    image: redis:7-alpine
    restart: always
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
    volumes:
      - /opt/mastodon/database/redis:/data:z
    networks:
      - internal_network

  redis-volatile:
    image: redis:7-alpine
    restart: always
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
    networks:
      - internal_network

  elasticsearch:
    image: elasticsearch:7.17.4
    restart: always
    env_file: database.env
    environment:
      - cluster.name=elasticsearch-mastodon
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - ingest.geoip.downloader.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test: ["CMD-SHELL", "nc -z elasticsearch 9200"]
    volumes:
      - /opt/mastodon/database/elasticsearch:/usr/share/elasticsearch/data:z
    networks:
      - internal_network

  website:
    #image: localhost/mastodon:v4.0.0
    image: tootsuite/mastodon:v4.0.0
    env_file:
      - application.env
      - database.env
    command: bash -c "bundle exec rails s -p 3000"
    restart: always
    depends_on:
      - postgresql
      # - pgbouncer
      - redis
      - redis-volatile
      - elasticsearch
    ports:
      - '127.0.0.1:3000:3000'
    networks:
      - internal_network
      - external_network
    healthcheck:
      test: ['CMD-SHELL', 'wget -q --spider --proxy=off localhost:3000/health || exit 1']
    volumes:
      - /opt/mastodon/web/system:/mastodon/public/system:z

  shell:
    #image: localhost/mastodon:v4.0.0
    image: tootsuite/mastodon:v4.0.0
    env_file:
      - application.env
      - database.env
    command: /bin/bash
    restart: "no"
    networks:
      - internal_network
      - external_network
    volumes:
      - /opt/mastodon/web/system:/mastodon/public/system:z
      - /opt/mastodon/web/static:/static:z

  streaming:
    #image: localhost/mastodon:v4.0.0
    image: tootsuite/mastodon:v4.0.0
    env_file:
      - application.env
      - database.env
    command: node ./streaming
    restart: always
    depends_on:
      - postgresql
      # - pgbouncer
      - redis
      - redis-volatile
      - elasticsearch
    ports:
      - '127.0.0.1:4000:4000'
    networks:
      - internal_network
      - external_network
    healthcheck:
      test: ['CMD-SHELL', 'wget -q --spider --proxy=off localhost:4000/api/v1/streaming/health || exit 1']

  sidekiq:
    #image: localhost/mastodon:v4.0.0
    image: tootsuite/mastodon:v4.0.0
    env_file:
      - application.env
      - database.env
    command: bundle exec sidekiq
    restart: always
    depends_on:
      - postgresql
      # - pgbouncer
      - redis
      - redis-volatile
      - website
    networks:
      - internal_network
      - external_network
    healthcheck:
      test: ['CMD-SHELL', "ps aux | grep '[s]idekiq\ 6' || false"]
    volumes:
      - /opt/mastodon/web/system:/mastodon/public/system:z

networks:
  external_network:
  internal_network:
#    internal: true
(Note: I have not tested the pgbouncer configuration; that is copied from sleeplessbeastie’s guide, and I have left it commented out in case I later need to add pgbouncer to my configuration.)
You can choose whether to use major versions and auto-upgrade minor versions with docker-compose pull, or manually change minor versions, for redis and postgresql.
You will want to lock Mastodon to a specific version, follow updates (for example, via the releases Atom feed), and observe the update procedures called out in the release notes when upgrading Mastodon. There is no major version tag for Elasticsearch on Docker Hub, so regularly check the Elasticsearch page on Docker Hub, especially for security updates that address Java security flaws as they are discovered.
Note: you cannot set internal_network as an internal network and use firewalld.
The default docker-compose.yml that comes with Mastodon sets:
networks:
  external_network:
  internal_network:
    internal: true
That internal: true doesn’t work with firewalld, which is why it is commented out in the docker-compose.yml here. If this is ever fixed, you may be able to re-add that additional restriction.
Choose or build image
All of the image: specifications in the docker-compose.yml file that start with tootsuite/ are the official builds, which you can use as-is.
However, it’s handy to have the source around, so I recommend you clone it.
# cd /opt/mastodon
# git clone https://github.com/mastodon/mastodon.git
If you are installing the Glitch-soc version of Mastodon, instead clone this repository:
# git clone https://github.com/glitch-soc/mastodon.git
Whichever version you install, this gives you an easy reference to all the files there, including the .env.production.sample file that you will want to reference when setting up your environment files. There are additional settings, not included in the example application.env file below, that you will want to consider for your deployment.
If you want to build an image from source, do this using a meaningful tag for the version you are actually using:
# cd /opt/mastodon/mastodon
# podman build --format docker -f Dockerfile --tag mastodon:v4.0.0
This will take a while; depending on your hardware, it could easily be 15 minutes.
Having done that, modify the image: tootsuite/ lines in docker-compose.yml to instead reference image: localhost/mastodon:v4.0.0 (or whatever tag you provided when you built it).
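That edit can be made in one pass with sed. This is a sketch; `use_local_image` is a hypothetical helper, and the pattern assumes the tags match between the tootsuite/ and localhost/ lines as in the file above:

```shell
# Hypothetical helper: point every "image: tootsuite/mastodon:..." line
# in the given compose file at the locally built image instead.
use_local_image() {
  sed -i 's|image: tootsuite/mastodon:|image: localhost/mastodon:|' "$1"
}
# e.g.: use_local_image /opt/mastodon/docker-compose.yml
```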
Pull images you did not build
Now download all the images required to compose your system.
# docker-compose pull
You’ll have to choose a registry source. I chose docker.io/* everywhere.
Secrets and configuration
It’s time to fill in application.env and database.env. You will want to start with what’s in the current .env.production.sample file in the Mastodon source for the version you are running. But for clarity, here’s a template application.env file:
# environment
RAILS_ENV=production
NODE_ENV=production
# domain
LOCAL_DOMAIN=your.server.fqdn
# redirect to the first profile
SINGLE_USER_MODE=false
# do not serve static files
RAILS_SERVE_STATIC_FILES=false
# concurrency
WEB_CONCURRENCY=2
MAX_THREADS=5
# pgbouncer
#PREPARED_STATEMENTS=false
# locale
DEFAULT_LOCALE=en
# email, not used
SMTP_SERVER=mailserver.invalid
SMTP_PORT=587
SMTP_LOGIN=mastodon
SMTP_PASSWORD=ifYouNeedId
SMTP_FROM_ADDRESS=notifications-noreply@your.server.fqdn
# secrets
SECRET_KEY_BASE=add
OTP_SECRET=add
# Changing VAPID keys will break push notifications
VAPID_PRIVATE_KEY=add
VAPID_PUBLIC_KEY=add
To generate values for SECRET_KEY_BASE and OTP_SECRET, run this twice to generate two different keys:
# docker-compose run --rm shell bundle exec rake secret
To generate values for VAPID_PRIVATE_KEY and VAPID_PUBLIC_KEY:
# docker-compose run --rm shell bundle exec rake mastodon:webpush:generate_vapid_key
Here’s a template database.env file:
# postgresql configuration
POSTGRES_USER=mastodon
POSTGRES_DB=mastodon
POSTGRES_PASSWORD=generate1
PGPASSWORD=generate1
PGPORT=5432
PGHOST=postgresql
PGUSER=mastodon
# pgbouncer configuration
#POOL_MODE=transaction
#ADMIN_USERS=postgres,mastodon
#DATABASE_URL="postgres://mastodon:generate1@postgresql:5432/mastodon"
# elasticsearch
ES_JAVA_OPTS=-Xms512m -Xmx512m
ELASTIC_PASSWORD=generate2
# mastodon database configuration
#DB_HOST=pgbouncer
DB_HOST=postgresql
DB_USER=mastodon
DB_NAME=mastodon
DB_PASS=generate1
DB_PORT=5432
REDIS_HOST=redis
REDIS_PORT=6379
CACHE_REDIS_HOST=redis-volatile
CACHE_REDIS_PORT=6379
ES_ENABLED=true
ES_HOST=elasticsearch
ES_PORT=9200
ES_USER=elastic
ES_PASS=generate2
To generate keys for the generate1 and generate2 values, run this twice:
# openssl rand -base64 15
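A small aside on the 15: because 15 is divisible by 3, the base64 output is an even 20 characters with no trailing = padding, which makes the result pleasant to paste into env files:

```shell
# 15 random bytes base64-encode to exactly 20 characters, no '=' padding.
pw=$(openssl rand -base64 15)
echo "${#pw}"   # 20
```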
Bring up
With the environment files filled with secrets and keys, initialize the services. First, get the static files ready to be served directly by nginx on the host. The two-stage copy is needed because SELinux (appropriately) prevents the container from writing directly to the final location, while cp outside the container can copy the files to where nginx will be able to see them.
# docker-compose run --rm shell bash -c "cp -r /opt/mastodon/public/* /static/"
# unalias cp
# cp -rf web/static/* web/public
Then bring up the data layer.
# docker-compose up -d postgresql redis redis-volatile
# watch podman ps
Wait for running (healthy), then Control-C and initialize the database.
# docker-compose run --rm shell bundle exec rake db:setup
Note that later, after each Mastodon update, you will need to run all database migrations and update your copies of static files.
# docker-compose run --rm shell bundle exec rake db:migrate
# docker-compose run --rm shell bash -c "cp -r /opt/mastodon/public/* /static/"
# unalias cp
# cp -rf web/static/* web/public
Some updates require additional migration steps. Read the release notes.
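The routine post-update steps above are easy to wrap in a small script. This is a sketch, not a tested tool: `mastodon_post_upgrade` is a hypothetical helper that takes your compose invocation and the host web directory as parameters:

```shell
# Hypothetical post-upgrade helper: run migrations, refresh static files
# inside the shell container, then copy them where nginx can serve them.
mastodon_post_upgrade() {
  local compose="$1"   # e.g. "docker-compose -f /opt/mastodon/docker-compose.yml"
  local web="$2"       # e.g. /opt/mastodon/web
  $compose run --rm shell bundle exec rake db:migrate &&
  $compose run --rm shell bash -c "cp -r /opt/mastodon/public/* /static/" &&
  \cp -rf "$web"/static/* "$web"/public/   # backslash bypasses any cp alias
}
```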
Handle HTTPS
Install certbot and the nginx plugin for certbot.
# dnf config-manager --set-enabled crb
# dnf install epel-release
# dnf install certbot python3-certbot-nginx
Get a certificate. You’ll have to answer a bunch of questions; make sure you include the FQDN of the server!
# certbot --nginx
The plugin may have started nginx without stopping it:
# killall nginx
Now create /etc/nginx/conf.d/mastodon.conf and change mastodon.example.com everywhere to the fully-qualified domain name of your server:
map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;
}

upstream backend {
  server 127.0.0.1:3000 fail_timeout=0;
}

upstream streaming {
  server 127.0.0.1:4000 fail_timeout=0;
}

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=CACHE:10m inactive=7d max_size=1g;

server {
  listen 80;
  server_name mastodon.example.com;
  location / { return 301 https://$host$request_uri; }
}

server {
  listen 443 ssl http2;
  server_name mastodon.example.com;

  ssl_protocols TLSv1.2 TLSv1.3;
  ssl_ciphers HIGH:!MEDIUM:!LOW:!aNULL:!NULL:!SHA;
  ssl_prefer_server_ciphers on;
  ssl_session_cache shared:SSL:10m;
  ssl_session_tickets off;
  ssl_certificate /etc/letsencrypt/live/mastodon.example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/mastodon.example.com/privkey.pem;

  keepalive_timeout 70;
  sendfile on;
  client_max_body_size 80m;

  root /opt/mastodon/web/public;

  gzip on;
  gzip_disable "msie6";
  gzip_vary on;
  gzip_proxied any;
  gzip_comp_level 6;
  gzip_buffers 16 8k;
  gzip_http_version 1.1;
  gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript image/svg+xml image/x-icon;

  add_header Strict-Transport-Security "max-age=31536000" always;

  location / {
    try_files $uri @proxy;
  }

  location ~ ^/(system/accounts/avatars|system/media_attachments/files) {
    add_header Cache-Control "public, max-age=31536000, immutable";
    add_header Strict-Transport-Security "max-age=31536000" always;
    # SELinux does not allow http and container access to these files together
    # to fix this, set up an nginx container for static files and
    # proxy to that container for static files
    #root /opt/mastodon/web/;
    try_files $uri @proxy;
  }

  location ~ ^/(emoji|packs) {
    add_header Cache-Control "public, max-age=31536000, immutable";
    add_header Strict-Transport-Security "max-age=31536000" always;
    try_files $uri @proxy;
  }

  location /sw.js {
    add_header Cache-Control "public, max-age=0";
    add_header Strict-Transport-Security "max-age=31536000" always;
    try_files $uri @proxy;
  }

  location @proxy {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Proxy "";
    proxy_pass_header Server;

    proxy_pass http://backend;
    proxy_buffering on;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    proxy_cache CACHE;
    proxy_cache_valid 200 7d;
    proxy_cache_valid 410 24h;
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    add_header X-Cached $upstream_cache_status;
    add_header Strict-Transport-Security "max-age=31536000" always;

    tcp_nodelay on;
  }

  location /api/v1/streaming {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Proxy "";

    proxy_pass http://streaming;
    proxy_buffering off;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;

    tcp_nodelay on;
  }

  error_page 500 501 502 503 504 /500.html;
}
Comment out the default server block from /etc/nginx/nginx.conf because it conflicts with the https redirect in mastodon.conf:
# server {
# listen 80;
# listen [::]:80;
# server_name _;
# root /usr/share/nginx/html;
#
# # Load configuration files for the default server block.
# include /etc/nginx/default.d/*.conf;
#
# error_page 404 /404.html;
# location = /404.html {
# }
#
# error_page 500 502 503 504 /50x.html;
# location = /50x.html {
# }
# }
Now enable and start nginx and the certbot renewal:
# systemctl enable --now nginx.service
# systemctl enable --now certbot-renew.timer
Tootctl
The tootctl CLI tool is essential for administering Mastodon. Make it accessible via the Mastodon shell container with a simple shell script.
Contents of /usr/local/bin/tootctl
#!/bin/bash
docker-compose -f /opt/mastodon/docker-compose.yml run --rm shell tootctl "$@"
Then:
# chmod +x /usr/local/bin/tootctl
Start mastodon
More systemd unit files!
Contents of /etc/systemd/system/mastodon.service
[Unit]
Description=Mastodon service
After=podman.service
[Service]
Type=oneshot
RemainAfterExit=yes
StandardError=file:/var/log/mastodon.err
StandardOutput=file:/var/log/mastodon.out
WorkingDirectory=/opt/mastodon
ExecStart=/usr/local/bin/docker-compose -f /opt/mastodon/docker-compose.yml up -d
ExecStop=/usr/local/bin/docker-compose -f /opt/mastodon/docker-compose.yml down
[Install]
WantedBy=multi-user.target
Then run
# systemctl daemon-reload
# systemctl enable --now mastodon.service
# watch docker-compose -f /opt/mastodon/docker-compose.yml ps
Wait for running (healthy) before the next step. This will probably take 30 seconds to a minute.
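Rather than watching the container list by hand, the pause can be scripted. This is a sketch; `wait_healthy` is a hypothetical helper that retries a command until its output contains "healthy":

```shell
# Hypothetical helper: run the given command up to TRIES times, one second
# apart, until its output contains "healthy". Returns non-zero on timeout.
wait_healthy() {
  local tries="$1"; shift
  local i=0
  while [ "$i" -lt "$tries" ]; do
    "$@" 2>/dev/null | grep -q 'healthy' && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}
# e.g.: wait_healthy 60 podman ps
```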
Create admin user
# tootctl accounts create admin-user --email admin-user@mail.invalid --confirmed --role Admin
This prints a new secure password for the account; save it. Then disable registrations during setup.
# tootctl settings registrations close
When you later want to open registrations:
# tootctl settings registrations open
Cleanup
Contents of /etc/systemd/system/mastodon-media-remove.service
[Unit]
Description=Mastodon - media remove service
Wants=mastodon-media-remove.timer
[Service]
Type=oneshot
StandardError=file:/var/log/mastodon-media-remove.err
StandardOutput=file:/var/log/mastodon-media-remove.out
WorkingDirectory=/opt/mastodon
ExecStart=/usr/local/bin/docker-compose -f /opt/mastodon/docker-compose.yml run --rm shell tootctl media remove
[Install]
WantedBy=multi-user.target
Contents of /etc/systemd/system/mastodon-media-remove.timer
[Unit]
Description=Schedule a media remove every week
[Timer]
Persistent=true
OnCalendar=Sat *-*-* 00:00:00
Unit=mastodon-media-remove.service
[Install]
WantedBy=timers.target
Contents of /etc/systemd/system/mastodon-preview_cards-remove.service
[Unit]
Description=Mastodon - preview cards remove service
Wants=mastodon-preview_cards-remove.timer
[Service]
Type=oneshot
StandardError=file:/var/log/mastodon-preview-remove.err
StandardOutput=file:/var/log/mastodon-preview-remove.out
WorkingDirectory=/opt/mastodon
ExecStart=/usr/local/bin/docker-compose -f /opt/mastodon/docker-compose.yml run --rm shell tootctl preview_cards remove
[Install]
WantedBy=multi-user.target
Contents of /etc/systemd/system/mastodon-preview_cards-remove.timer
[Unit]
Description=Schedule a preview cards remove every week
[Timer]
Persistent=true
OnCalendar=Sat *-*-* 00:00:00
Unit=mastodon-preview_cards-remove.service
[Install]
WantedBy=timers.target
Now start them up
# systemctl daemon-reload
# systemctl enable --now mastodon-preview_cards-remove.timer
# systemctl enable --now mastodon-media-remove.timer
Backups
I chose to use restic. Use whatever you prefer, but here’s an easy-to-apply pattern using restic.
Contents of /opt/mastodon/backup-files
/etc/nginx
/etc/letsencrypt
/etc/systemd/system
/etc/fail2ban
/root
/opt/mastodon/database/pgbackups
/opt/mastodon/*.env
/opt/mastodon/docker-compose.yml
/opt/mastodon/database/redis
/opt/mastodon/web/system
/opt/mastodon/backup-files
/opt/mastodon/mastodon-backup
/var/lib/rpm
/usr/local/bin
Now run:
# dnf install restic
Contents of /etc/systemd/system/mastodon-backup.timer
[Unit]
Description=Schedule a mastodon backup every hour
[Timer]
Persistent=true
OnCalendar=*:00:00
Unit=mastodon-backup.service
[Install]
WantedBy=timers.target
Contents of /etc/systemd/system/mastodon-backup.service
[Unit]
Description=Mastodon - backup service
# Without this, they can run at the same time and race to docker-compose,
# double-creating networks and failing due to ambiguous network definition
# requiring `docker network prune` and restarting
After=mastodon.service
[Service]
Type=oneshot
StandardError=file:/var/log/mastodon-backup.err
StandardOutput=file:/var/log/mastodon-backup.log
WorkingDirectory=/opt/mastodon
ExecStart=/bin/bash /opt/mastodon/mastodon-backup
[Install]
WantedBy=multi-user.target
Contents of /root/backup-configuration, which will need some pieces filled in. Do keep in mind that you also need to store the restic password somewhere safe; if it’s only on this system and nowhere else, your backup is no better than a pile of random data. It’s important to start this after mastodon.service, because otherwise mastodon and mastodon-backup will race to create the docker networks, both succeed in creating them, and then be very confused by the duplicate networks. If this ever happens to you, docker network prune will remove the duplicate networks and you can start over.
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
SERVER=
PORT=
BUCKET=
RESTIC_PASSWORD_FILE=/root/restic-password
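If you ever do hit the duplicate-network race described above, a small filter makes the symptom easy to spot. This is a sketch; `find_dup_networks` is a hypothetical helper fed by the output of docker network ls:

```shell
# Hypothetical helper: print any network name that appears more than once.
find_dup_networks() {
  sort | uniq -d
}
# e.g.: docker network ls --format '{{.Name}}' | find_dup_networks
```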
Contents of /opt/mastodon/backup-init
#!/bin/bash
set -e
. /root/backup-configuration
restic -r s3:https://$SERVER:$PORT/$BUCKET init
Contents of /opt/mastodon/mastodon-backup
#!/bin/bash
set -e
. /root/backup-configuration
docker-compose -f /opt/mastodon/docker-compose.yml run --rm postgresql sh -c "pg_dump -Fp mastodon | gzip > /backups/dump.sql.gz"
restic -r s3:https://$SERVER:$PORT/$BUCKET --cache-dir=/root backup $(cat /opt/mastodon/backup-files) --exclude /opt/mastodon/database/postgresql
restic -r s3:https://$SERVER:$PORT/$BUCKET --cache-dir=/root forget --prune --keep-hourly 24 --keep-daily 7 --keep-monthly 3
Now start them up
# chmod +x /opt/mastodon/mastodon-backup /opt/mastodon/backup-init
# /opt/mastodon/backup-init
# systemctl daemon-reload
# systemctl enable --now mastodon-backup.service
# systemctl enable --now mastodon-backup.timer
Confirm that hourly backups are happening and are accessible:
# restic -r s3:https://$SERVER:$PORT/$BUCKET snapshots
# restic -r s3:https://$SERVER:$PORT/$BUCKET mount /mnt
Search
Search will (as of this writing) fail to deploy until at least one local toot has been made. After the first toot on your server, initialize search:
# tootctl search deploy
Questions?
I’ll try to improve this post to address your questions if I know the answers. Feel free to reach out by comments on Mastodon.