# app-backup (Rocky Linux 10): split backups for WordPress, Nextcloud, Gitea + DBs (OneDrive/rclone)

This repo contains a **backup and restore setup** for typical self-hosted apps on Rocky Linux 10:

- **WordPress** (webroot)
- **Nextcloud** (code + data separated; in our setup, data lives under `.../nextcloud/data`)
- **Gitea** (native installation via systemd + MariaDB)
- **MariaDB dumps** (WordPress DB, Nextcloud DB, Gitea DB)
- Upload to **OneDrive** via **rclone**

New in this version: no more giant "all-in-one" archive, but **cleanly separated archives per component**. That makes uploads more stable (large files), restores more selective, and errors easier to isolate.

---

## ✅ Goals / design decisions

### 1) Split instead of one "fat archive"
Each run produces several archives (zstd or gzip):

- `..._meta.tar.zst` metadata (timestamp, hostname, size info)
- `..._db.tar.zst` DB dumps (SQL files)
- `..._wordpress.tar.zst` WordPress web data (without the Nextcloud subfolder)
- `..._nextcloud.tar.zst` Nextcloud **code/web** (with `data/` excluded)
- `..._nextcloud-data.tar.zst` Nextcloud **data**, separately
- `..._gitea.tar.zst` Gitea data (APP_DATA_PATH)
- `..._gitea-etc.tar.zst` `/etc/gitea` (Gitea config)

Advantages:
- more stable uploads at **12-18 GB** (instead of one monolith)
- selective restores (e.g. only the DB, only Nextcloud data, ...)
- easier troubleshooting (one broken archive does not ruin everything)
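The per-component naming scheme above can be sketched in plain bash; the component name and timestamp here are illustrative assumptions, not values taken from the script:

```shell
# Sketch of the archive naming scheme listed above (values are illustrative).
ARCHIVE_PREFIX="appbackup"
COMPRESSOR="zstd"                 # zstd|gzip, as in app-backup.conf
ts="2026-02-11_02-31-28"          # example timestamp
component="db"

ext="tar.zst"
if [[ "$COMPRESSOR" == "gzip" ]]; then ext="tar.gz"; fi
name="${ARCHIVE_PREFIX}_${ts}_${component}.${ext}"
echo "$name"
```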
### 2) Nextcloud data below `NC_DIR`
In our setup Nextcloud is laid out like this:
- `NC_DIR=/var/www/html/nextcloud` (code)
- `NC_DATA_DIR=/var/www/html/nextcloud/data` (data)

So that **data is not backed up twice**:
- When `NC_DIR` is backed up, `data/` is **explicitly excluded** (`rsync --exclude data/`).
- `NC_DATA_DIR` is then backed up as its own package.

### 3) WordPress lives in the webroot `/var/www/html`, with Nextcloud as a subfolder
In our setup:
- WordPress is in `WP_DIR=/var/www/html`
- Nextcloud is in `/var/www/html/nextcloud`

So that Nextcloud does not end up in the WordPress archive:
- the WordPress backup automatically excludes `nextcloud/` when `NC_DIR` is exactly `WP_DIR/nextcloud`.
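That auto-exclude rule is simple enough to sketch in plain bash (the array name `wp_excludes` is an assumption, not necessarily what the script uses internally):

```shell
# Sketch of the auto-exclude rule described above.
WP_DIR="/var/www/html"
NC_DIR="/var/www/html/nextcloud"

wp_excludes=()
# Only exclude nextcloud/ from the WordPress archive if it is literally
# a direct subdirectory of the WordPress webroot.
if [[ "$NC_DIR" == "${WP_DIR}/nextcloud" ]]; then
  wp_excludes+=("nextcloud/")
fi
printf '%s\n' "${wp_excludes[@]}"
```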
### 4) rclone / OneDrive hardened for large files
The upload is made robust with:
- chunking: `--onedrive-chunk-size 64M`
- timeouts: `--timeout 1h`, `--contimeout 30s`
- conservative concurrency: `--transfers 2`, `--checkers 4`
- more retries: `--retries 10`, `--low-level-retries 40`

Also: **remote names are case-sensitive!**
In our setup the remote is e.g. `OneDrive:` (not `onedrive:`).
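Collected into a single invocation, those flags might look like this; the array name and the destination path in the comment are illustrative, not taken from the script:

```shell
# Tuned rclone flags from the list above, gathered in one array.
RCLONE_FLAGS=(
  --onedrive-chunk-size 64M      # chunked uploads for OneDrive
  --timeout 1h                   # generous I/O timeout for big files
  --contimeout 30s               # but fail fast on connect problems
  --transfers 2 --checkers 4     # conservative concurrency
  --retries 10 --low-level-retries 40
)
# Example call (not executed here); remember the remote name is case-sensitive:
#   rclone copy archive_db.tar.zst "OneDrive:Sicherung/.../appbackup_<ts>/" "${RCLONE_FLAGS[@]}"
echo "${#RCLONE_FLAGS[@]}"
```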
---
## 📦 Files & paths

### Scripts
- `app-backup.sh` → create the backup, build the archives, optionally upload.
- `app-restore.sh` → restore from a local folder or a OneDrive run folder.
- `app-backup.conf` → configuration (paths/services/DB/credentials).

### Target paths (defaults)
- Workdir: `/var/backups/app-backup`
- Archives: `/var/backups/app-backup/archives`
- Logs: `/var/log/app-backup`

### OneDrive target (example)
Uploads go to:
```
OneDrive:Sicherung/JRITServerBackups/<hostname>/appbackup_<timestamp>/
```

---

## 🔧 Installation / setup

### 1) Install the scripts
Example:
```bash
sudo install -d /etc/app-backup /usr/local/sbin
sudo install -m 0755 app-backup.sh /usr/local/sbin/app-backup.sh
sudo install -m 0755 app-restore.sh /usr/local/sbin/app-restore.sh
sudo install -m 0640 app-backup.conf /etc/app-backup/app-backup.conf
```
### 2) Create the DB credential files
One `.cnf` file each for WordPress / Nextcloud / Gitea.
Examples:
- `/etc/app-backup/db-wordpress.cnf`
- `/etc/app-backup/db-nextcloud.cnf`
- `/etc/app-backup/db-gitea.cnf`

Contents:
```ini
[client]
user=backup
password=DEIN_PASSWORT
host=localhost
```
Permissions:
```bash
sudo chown root:root /etc/app-backup/db-*.cnf
sudo chmod 600 /etc/app-backup/db-*.cnf
```
### 3) MariaDB backup user (minimal privileges)
Per database (e.g. gitea):
```sql
CREATE USER 'backup'@'localhost' IDENTIFIED BY 'DEIN_PASSWORT';
GRANT SELECT, SHOW VIEW, TRIGGER, EVENT, LOCK TABLES ON gitea.* TO 'backup'@'localhost';
FLUSH PRIVILEGES;
```
Optional: reuse the same user for wordpress/nextcloud, then add `GRANT ... ON wordpress.*` etc. for each.
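If you do reuse the single `backup` user for all three databases, the additional grants would look like this (database names as used throughout this README):

```sql
GRANT SELECT, SHOW VIEW, TRIGGER, EVENT, LOCK TABLES ON wordpress.* TO 'backup'@'localhost';
GRANT SELECT, SHOW VIEW, TRIGGER, EVENT, LOCK TABLES ON nextcloud.* TO 'backup'@'localhost';
FLUSH PRIVILEGES;
```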
### 4) Check the rclone remote (case-sensitive!)
On the server (as the same user that will later run rclone, typically root or johannes):
```bash
rclone listremotes
rclone lsf "OneDrive:" --max-depth 1
rclone lsf "OneDrive:Sicherung" --max-depth 1
```
⚠️ Important: if the backup runs as `root`, then `root` also needs access to the rclone config.
There are typically two ways:

**Option A: root uses its own rclone config**
- Run `sudo rclone config` as root and create the remote.

**Option B: root reuses johannes' config**
- set `RCLONE_CONFIG=/home/johannes/.config/rclone/rclone.conf` in the script/service
- or set `Environment=RCLONE_CONFIG=...` in the systemd unit
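For Option B, a systemd drop-in is one way to pin the variable; the unit name matches the example unit later in this README, while the drop-in file name is hypothetical:

```ini
# /etc/systemd/system/app-backup.service.d/rclone.conf  (hypothetical drop-in)
[Service]
Environment=RCLONE_CONFIG=/home/johannes/.config/rclone/rclone.conf
```

After adding it, run `sudo systemctl daemon-reload` so systemd picks up the drop-in.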
---
## ⚙️ Configuration: app-backup.conf (the important parts)

### OneDrive remote base
```bash
RCLONE_REMOTE_BASE="OneDrive:Sicherung/JRITServerBackups/$(hostname -s)"
```
The script creates one subfolder per run:
```bash
remote_run="${RCLONE_REMOTE_BASE}/appbackup_<timestamp>"
```

### WordPress / Nextcloud layout (important!)
```bash
WP_DIR="/var/www/html"
NC_DIR="/var/www/html/nextcloud"
NC_DATA_DIR="/var/www/html/nextcloud/data"
```
This yields:
- WordPress = everything in `/var/www/html` **without** `nextcloud/`
- Nextcloud code = `/var/www/html/nextcloud` **without** `data/`
- Nextcloud data = `/var/www/html/nextcloud/data`, separately

### Gitea (native + MariaDB)
From your systemd unit / app.ini:
- `WorkingDirectory=/var/lib/gitea`
- `APP_DATA_PATH=/var/lib/gitea/data`
- config: `/etc/gitea/app.ini`

Therefore:
```bash
ENABLE_GITEA="true"
GITEA_SERVICE_NAME="gitea"
ENABLE_GITEA_SERVICE_STOP="true"
GITEA_DATA_DIR="/var/lib/gitea/data"
GITEA_ETC_DIR="/etc/gitea"
GITEA_DB_NAME="gitea"
GITEA_DB_CNF="/etc/app-backup/db-gitea.cnf"
```
---
## ▶️ Running a backup
```bash
sudo /usr/local/sbin/app-backup.sh
```
Expected behavior:
1) a lock at `/run/app-backup.lock` prevents parallel runs
2) Gitea is optionally stopped briefly (consistent snapshot)
3) DB dumps: wordpress / nextcloud / gitea
4) rsync into staging
5) several `.tar.zst` archives are created
6) per-file upload to OneDrive (if `ENABLE_UPLOAD=true`)
7) remote retention (best effort)
8) local retention deletes old archives
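The locking in step 1 uses `flock` on a dedicated file descriptor. A minimal standalone sketch of the mechanism, using a temp file instead of `/run/app-backup.lock`:

```shell
# Minimal sketch of the flock-based locking used by the script.
LOCKFILE="$(mktemp)"               # the real script uses /run/app-backup.lock
exec 9>"$LOCKFILE"                 # keep fd 9 open for the lifetime of the run
if flock -n 9; then                # -n: do not block; fail if already locked
  LOCK_STATE="acquired"
else
  LOCK_STATE="busy"                # a parallel run holds the lock
fi
echo "$LOCK_STATE"
```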
---
## ♻️ Running a restore

### Restore from OneDrive (pass the run folder name)
You need the folder name, e.g.:
`appbackup_2026-02-11_02-31-28`

Then:
```bash
sudo /usr/local/sbin/app-restore.sh --remote-run appbackup_2026-02-11_02-31-28
```

### Restore from a local folder
If you already have the archives locally:
```bash
sudo /usr/local/sbin/app-restore.sh --local-run /var/backups/app-backup/archives/<run-folder>
```

### Dry run
```bash
sudo /usr/local/sbin/app-restore.sh --remote-run appbackup_... --dry-run
```

### Files only / DB only
```bash
sudo /usr/local/sbin/app-restore.sh --remote-run appbackup_... --no-db
sudo /usr/local/sbin/app-restore.sh --remote-run appbackup_... --no-files
```
---
## 🧠 Restore details (what happens afterwards?)

### Nextcloud
- maintenance mode is optionally enabled (configurable)
- the restore copies code and data back separately
- afterwards:
  - `occ maintenance:repair`
  - optionally `occ files:scan --all` (disabled by default; can take a long time)

### Gitea
- the service is optionally stopped
- data is synced back to `GITEA_DATA_DIR`
- `/etc/gitea` is restored
- the DB dump is imported
- the service is started again
---
## 🧯 Troubleshooting

### 1) rclone remote not reachable / "didn't find section in config file"
Typical causes:
- the remote is called `OneDrive:` but the config uses `onedrive:` (remote names are **case-sensitive**)
- the backup runs as `root`, but the rclone config only exists for `johannes`

Check:
```bash
sudo rclone listremotes
sudo -u johannes rclone listremotes
```
Fix:
- either configure the rclone remote for root as well,
- or point `RCLONE_CONFIG` at johannes' config.

### 2) Nextcloud data duplicated or missing
Things to check:
- is `NC_DIR` correct?
- is `NC_DATA_DIR` correct?
- the script excludes `data/` from the Nextcloud code backup; data is backed up separately.

### 3) Large uploads abort
Try e.g.:
- `RCLONE_TRANSFERS=1`
- `RCLONE_CHECKERS=2`
- `RCLONE_ONEDRIVE_CHUNK_SIZE=32M`
- optionally `RCLONE_BWLIMIT="20M"`
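In `/etc/app-backup/app-backup.conf`, those overrides would read as follows (values are the suggestions above, not the defaults):

```shell
# Conservative settings for flaky links (suggested values from the list above).
RCLONE_TRANSFERS="1"
RCLONE_CHECKERS="2"
RCLONE_ONEDRIVE_CHUNK_SIZE="32M"
RCLONE_BWLIMIT="20M"   # optional bandwidth cap
```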
### 4) Restore should mirror "hard"
The default is cautious (no deletes). If you really want the target directories mirrored exactly:
```bash
RESTORE_STRICT_DELETE="true"
```
⚠️ Warning: this can delete files that are not in the backup.
---
## 🗓️ Optional: systemd service + timer (example)
> Just an example; if you like, we can add these as files to the repo.

`/etc/systemd/system/app-backup.service`
```ini
[Unit]
Description=app-backup
After=network.target mariadb.service
Wants=mariadb.service

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/app-backup.sh
```

`/etc/systemd/system/app-backup.timer`
```ini
[Unit]
Description=Run app-backup daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable:
```bash
sudo systemctl daemon-reload
sudo systemctl enable --now app-backup.timer
sudo systemctl list-timers | grep app-backup
```

---
## ✅ Recommendation: test your restores
Backups are only good once a restore has been tested:
- test VM / test host
- import the DBs
- unpack the archives / rsync them back
- Nextcloud starts, login works, files are visible
- WordPress frontend/backend OK
- Gitea web UI OK, repos present

---
# app-backup.conf
# -----------------------------------------------------------------------------
# Configuration for app-backup.sh and app-restore.sh (split archives + tuned rclone)
# -----------------------------------------------------------------------------

WORKDIR="/var/backups/app-backup"
STAGING_ROOT="${WORKDIR}/staging"
ARCHIVE_DIR="${WORKDIR}/archives"
ARCHIVE_PREFIX="appbackup"

LOCAL_RETENTION_DAYS=7
COMPRESSOR="zstd"          # zstd|gzip

# rclone / OneDrive (remote name is case-sensitive!)
RCLONE_BIN="rclone"
RCLONE_REMOTE_BASE="OneDrive:Sicherung/JRITServerBackups/$(hostname -s)"
ENABLE_UPLOAD="true"

# Large-file robustness
RCLONE_ONEDRIVE_CHUNK_SIZE="64M"
RCLONE_TIMEOUT="1h"
RCLONE_CONTIMEOUT="30s"
RCLONE_TRANSFERS="2"
RCLONE_CHECKERS="4"

# Retry strategy
RCLONE_RETRIES=10
RCLONE_LOW_LEVEL_RETRIES=40
RCLONE_RETRIES_SLEEP="30s"
RCLONE_STATS="1m"
RCLONE_BWLIMIT="0"         # e.g. "20M"

ENABLE_REMOTE_RETENTION="true"
REMOTE_RETENTION_DAYS=30

MIN_FREE_GB=12
NICE_LEVEL=10
IONICE_CLASS=2
IONICE_LEVEL=6

ENABLE_MAIL_REPORT="true"
MAIL_TO="johannes"
MAIL_FROM="app-backup@$(hostname -f 2>/dev/null || hostname)"
MAIL_SUBJECT_PREFIX="[app-backup]"
MAIL_INCLUDE_LOG_TAIL_LINES=200

# Components
ENABLE_WORDPRESS="true"
ENABLE_NEXTCLOUD="true"
ENABLE_NEXTCLOUD_DATA="true"
ENABLE_NEXTCLOUD_MAINTENANCE="true"
ENABLE_DB_DUMPS="true"

# WordPress
WP_DIR="/var/www/html"
WP_DB_NAME="wordpress"
WP_DB_CNF="/etc/app-backup/db-wordpress.cnf"

# Nextcloud
NC_DIR="/var/www/html/nextcloud"
NC_OCC_USER="apache"
NC_DB_NAME="nextcloud"
NC_DB_CNF="/etc/app-backup/db-nextcloud.cnf"

# Nextcloud data UNDER NC_DIR
NC_DATA_DIR="/var/www/html/nextcloud/data"
NC_FILES_SCAN_AFTER_RESTORE="false"

# Gitea (native install)
ENABLE_GITEA="true"
GITEA_SERVICE_NAME="gitea"
ENABLE_GITEA_SERVICE_STOP="true"
# According to your /etc/gitea/app.ini:
#   APP_DATA_PATH = /var/lib/gitea/data
GITEA_DATA_DIR="/var/lib/gitea/data"
GITEA_ETC_DIR="/etc/gitea"

# Gitea MariaDB
GITEA_DB_NAME="gitea"
GITEA_DB_CNF="/etc/app-backup/db-gitea.cnf"

# Restore behavior
RESTORE_STRICT_DELETE="false"

---
set -Eeuo pipefail
umask 027

# ==============================================================================
# app-backup.sh
# - Separate archives per component (db / wordpress / nextcloud-code / nextcloud-data / gitea)
# - rclone tuned for large files (OneDrive chunking + timeouts + conservative concurrency)
# - Nextcloud "data" excluded from code backup (layout: ${NC_DIR}/data)
# - Gitea native install (systemd), data path configurable (default: /var/lib/gitea/data)
# ==============================================================================

# ---------- Logging ----------
LOG_DIR="/var/log/app-backup"
mkdir -p "$LOG_DIR"
ts="$(date '+%Y-%m-%d_%H-%M-%S')"
LOG_FILE="${LOG_DIR}/app-backup_${ts}.log"

# Log to file + journald
exec > >(tee -a "$LOG_FILE" | systemd-cat -t app-backup -p info) 2>&1

# ---------- Config ----------
# ...

: "${LOCAL_RETENTION_DAYS:=7}"
: "${COMPRESSOR:=zstd}"            # zstd|gzip
: "${ARCHIVE_PREFIX:=appbackup}"   # file prefix

# rclone
: "${RCLONE_BIN:=rclone}"
: "${RCLONE_REMOTE_BASE:=OneDrive:Sicherung/JRITServerBackups/$(hostname -s)}"   # remote folder
: "${RCLONE_RETRIES:=10}"
: "${RCLONE_LOW_LEVEL_RETRIES:=40}"
: "${RCLONE_RETRIES_SLEEP:=30s}"
: "${RCLONE_STATS:=1m}"
: "${RCLONE_BWLIMIT:=0}"           # "0" = no limit
: "${ENABLE_UPLOAD:=true}"

# large-file stability (OneDrive)
: "${RCLONE_ONEDRIVE_CHUNK_SIZE:=64M}"
: "${RCLONE_TIMEOUT:=1h}"
: "${RCLONE_CONTIMEOUT:=30s}"
: "${RCLONE_TRANSFERS:=2}"
: "${RCLONE_CHECKERS:=4}"

: "${REMOTE_RETENTION_DAYS:=30}"
: "${ENABLE_REMOTE_RETENTION:=true}"

# Disk-space safety
: "${MIN_FREE_GB:=12}"

# Process niceness
: "${NICE_LEVEL:=10}"
: "${IONICE_CLASS:=2}"
: "${IONICE_LEVEL:=6}"

# Mail reporting
: "${ENABLE_MAIL_REPORT:=true}"
: "${MAIL_TO:=johannes}"
: "${MAIL_FROM:=app-backup@$(hostname -f 2>/dev/null || hostname)}"
: "${MAIL_SUBJECT_PREFIX:=[app-backup]}"
: "${MAIL_INCLUDE_LOG_TAIL_LINES:=200}"

# OPTIONAL: allow wp excludes via config, e.g. WP_EXCLUDES=("nextcloud/" "foo/")
# If unset, we compute a safe default for your setup.
: "${WP_EXCLUDES_MODE:=auto}"      # auto|manual

# ---------- State for report ----------
START_EPOCH="$(date +%s)"
STATUS="SUCCESS"
ERROR_SUMMARY=""
RCLONE_STATUS="SKIPPED"
RCLONE_OUTPUT_FILE=""
SIZES_FILE=""

# ---------- Helpers ----------
die() { echo "ERROR: $*"; exit 1; }
have() { command -v "$1" >/dev/null 2>&1; }
human_bytes() { local b="${1:-0}"; if have numfmt; then numfmt --to=iec-i --suffix=B "$b"; else echo "${b}B"; fi; }
bytes_of_path() { local p="$1"; [[ -e "$p" ]] && (du -sb "$p" 2>/dev/null | awk '{print $1}' || du -sB1 "$p" | awk '{print $1}') || echo 0; }
free_bytes_workdir_fs() { df -PB1 "$WORKDIR" | awk 'NR==2{print $4}'; }

ensure_min_free_space() {
  local min_bytes=$((MIN_FREE_GB * 1024 * 1024 * 1024))
  local avail; avail="$(free_bytes_workdir_fs)"
  echo "-- Free space on WORKDIR filesystem: $(human_bytes "$avail") (min required: ${MIN_FREE_GB}GiB)"
  [[ "$avail" -ge "$min_bytes" ]] || die "Not enough free space on WORKDIR filesystem (need >= ${MIN_FREE_GB}GiB)."
}

cleanup_old_local_archives() {
  mkdir -p "$ARCHIVE_DIR"
  echo "-- Local retention: deleting archives older than ${LOCAL_RETENTION_DAYS} day(s) from ${ARCHIVE_DIR}"
  find "$ARCHIVE_DIR" -type f -name "${ARCHIVE_PREFIX}_*.tar.*" -mtime "+${LOCAL_RETENTION_DAYS}" -print -delete 2>/dev/null || true
}

send_report_mail() {
  [[ "${ENABLE_MAIL_REPORT}" == "true" ]] || return 0
  local SENDMAIL_BIN="/usr/sbin/sendmail"
  [[ -x "$SENDMAIL_BIN" ]] || { echo "WARN: sendmail missing at $SENDMAIL_BIN"; return 0; }

  local end_epoch now duration subject host
  end_epoch="$(date +%s)"
  # ...
  host="$(hostname -f 2>/dev/null || hostname)"
  subject="${MAIL_SUBJECT_PREFIX} ${STATUS} ${host} ${ts}"

  {
    echo "From: ${MAIL_FROM}"
    echo "To: ${MAIL_TO}"
    # ...
    [[ -n "${ERROR_SUMMARY}" ]] && echo "Fehler: ${ERROR_SUMMARY}"
    echo "Dauer: ${duration}s"
    echo
    echo "Config: ${CONFIG_FILE}"
    echo "Log: ${LOG_FILE}"
    echo "Workdir: ${WORKDIR}"
    echo "Archive dir: ${ARCHIVE_DIR}"
    echo "Kompression: ${COMPRESSOR}"
    echo
    echo "Remote"
    echo "------"
    echo "Upload: ${ENABLE_UPLOAD}"
    echo "Remote base: ${RCLONE_REMOTE_BASE}"
    echo "Upload Status: ${RCLONE_STATUS}"
    [[ -n "${RCLONE_OUTPUT_FILE}" && -f "${RCLONE_OUTPUT_FILE}" ]] && { echo; echo "rclone Tail:"; tail -n 60 "${RCLONE_OUTPUT_FILE}" || true; }
    echo
    echo "Größen"
    echo "------"
    [[ -n "${SIZES_FILE}" && -f "${SIZES_FILE}" ]] && cat "${SIZES_FILE}" || echo "(keine Größeninfos verfügbar)"
    echo
    echo "Log-Auszug (Tail)"
    echo "-----------------"
    tail -n "${MAIL_INCLUDE_LOG_TAIL_LINES}" "${LOG_FILE}" || true
  } | "$SENDMAIL_BIN" -t || echo "WARN: sending mail failed"
}

cleanup_staging() { [[ -n "${STAGING_DIR:-}" && -d "${STAGING_DIR:-}" ]] && rm -rf "${STAGING_DIR:?}"; }

# Nextcloud maintenance-mode safety trap
NC_MAINTENANCE_ON=false
  # ...
  fi
}

# Gitea service safety trap
GITEA_WAS_STOPPED=false
gitea_service_start() {
  if [[ "${GITEA_WAS_STOPPED}" == "true" ]]; then
    echo "-- Starting Gitea service (trap)..."
    systemctl start "${GITEA_SERVICE_NAME}" || true
    GITEA_WAS_STOPPED=false
  fi
}

on_error() { local ec=$?; STATUS="FAIL"; ERROR_SUMMARY="Exit code ${ec} (see log)"; return 0; }
on_exit()  { local ec=$?; send_report_mail; nc_maintenance_off; gitea_service_start; cleanup_staging; exit "${ec}"; }

trap on_error ERR
trap on_exit EXIT
# ...

# ---------- Preconditions ----------
[[ $EUID -eq 0 ]] || die "Must run as root."
mkdir -p "$WORKDIR" "$ARCHIVE_DIR" "$STAGING_ROOT" "$LOG_DIR"

# ---------- Locking ----------
LOCKFILE="/run/app-backup.lock"
exec 9>"$LOCKFILE"
if ! flock -n 9; then
@@ -284,200 +198,209 @@ if ! flock -n 9; then
fi fi
# ---------- Tools ---------- # ---------- Tools ----------
for t in tar rsync flock df find stat; do for t in tar rsync flock df find stat; do have "$t" || die "Missing required tool: $t"; done
have "$t" || die "Missing required tool: $t" if [[ "$COMPRESSOR" == "zstd" ]]; then have zstd || die "COMPRESSOR=zstd but zstd is missing"
done elif [[ "$COMPRESSOR" == "gzip" ]]; then have gzip || die "COMPRESSOR=gzip but gzip is missing"
else die "Unsupported COMPRESSOR=$COMPRESSOR (use zstd or gzip)"; fi
if [[ "$COMPRESSOR" == "zstd" ]]; then
have zstd || die "COMPRESSOR=zstd but zstd is missing"
elif [[ "$COMPRESSOR" == "gzip" ]]; then
have gzip || die "COMPRESSOR=gzip but gzip is missing"
else
die "Unsupported COMPRESSOR=$COMPRESSOR (use zstd or gzip)"
fi
if [[ "${ENABLE_DB_DUMPS}" == "true" ]]; then if [[ "${ENABLE_DB_DUMPS}" == "true" ]]; then
have mysqldump || die "ENABLE_DB_DUMPS=true but mysqldump missing" have mysqldump || die "ENABLE_DB_DUMPS=true but mysqldump missing"
have mysql || die "ENABLE_DB_DUMPS=true but mysql client missing"
fi fi
have "$RCLONE_BIN" || die "rclone not installed (missing: $RCLONE_BIN)" have "$RCLONE_BIN" || die "rclone not installed (missing: $RCLONE_BIN)"
# ---------- Disk safety: cleanup + free-space checks ---------- # ---------- Disk safety ----------
cleanup_old_local_archives cleanup_old_local_archives
ensure_min_free_space ensure_min_free_space
# ---------- Staging ---------- # ---------- Staging ----------
STAGING_DIR="${STAGING_ROOT}/run_${ts}" STAGING_DIR="${STAGING_ROOT}/run_${ts}"
mkdir -p "$STAGING_DIR"/{db,files,meta} mkdir -p "$STAGING_DIR"/{db,files,meta}
echo "$(date -Is)" > "$STAGING_DIR/meta/created_at.txt" echo "$(date -Is)" > "$STAGING_DIR/meta/created_at.txt"
echo "$(hostname -f 2>/dev/null || hostname)" > "$STAGING_DIR/meta/hostname.txt" echo "$(hostname -f 2>/dev/null || hostname)" > "$STAGING_DIR/meta/hostname.txt"
echo "${ts}" > "$STAGING_DIR/meta/timestamp.txt" echo "${ts}" > "$STAGING_DIR/meta/timestamp.txt"
# ---------- Services consistency ----------
if [[ "${ENABLE_GITEA:-false}" == "true" && "${ENABLE_GITEA_SERVICE_STOP:-true}" == "true" ]]; then
if systemctl is-active --quiet "${GITEA_SERVICE_NAME}"; then
echo "-- Stopping Gitea service for consistent backup: ${GITEA_SERVICE_NAME}"
systemctl stop "${GITEA_SERVICE_NAME}"
GITEA_WAS_STOPPED=true
fi
fi
# ---------- DB Dumps ----------
if [[ "${ENABLE_DB_DUMPS}" == "true" ]]; then
  echo "-- DB dumps enabled"

  dump_mysql_db() {
    local cnf="$1" db="$2" out="$3"
    [[ -r "$cnf" ]] || die "DB CNF not readable: $cnf"
    echo "-- Dump MySQL/MariaDB DB: ${db}"
    mysqldump --defaults-extra-file="$cnf" --single-transaction --routines --triggers --hex-blob "$db" > "$out"
  }

  # Plain if-blocks instead of "cond && dump || true": the "|| true" form would
  # also swallow a *failed* dump under set -e, not just the empty-variable case.
  if [[ -n "${WP_DB_NAME:-}" ]]; then
    dump_mysql_db "${WP_DB_CNF}" "${WP_DB_NAME}" "$STAGING_DIR/db/wordpress_${ts}.sql"
  fi

  if [[ -n "${NC_DB_NAME:-}" ]]; then
    if [[ "${ENABLE_NEXTCLOUD_MAINTENANCE:-true}" == "true" ]]; then
      echo "-- Nextcloud maintenance mode ON..."
      sudo -u "${NC_OCC_USER}" php "${NC_DIR}/occ" maintenance:mode --on
      NC_MAINTENANCE_ON=true
    fi
    dump_mysql_db "${NC_DB_CNF}" "${NC_DB_NAME}" "$STAGING_DIR/db/nextcloud_${ts}.sql"
    if [[ "${ENABLE_NEXTCLOUD_MAINTENANCE:-true}" == "true" ]]; then
      echo "-- Nextcloud maintenance mode OFF..."
      sudo -u "${NC_OCC_USER}" php "${NC_DIR}/occ" maintenance:mode --off || true
      NC_MAINTENANCE_ON=false
    fi
  fi

  if [[ "${ENABLE_GITEA:-false}" == "true" ]]; then
    # native Gitea installation with MariaDB
    if [[ -n "${GITEA_DB_NAME:-}" ]]; then
      dump_mysql_db "${GITEA_DB_CNF}" "${GITEA_DB_NAME}" "$STAGING_DIR/db/gitea_${ts}.sql"
    else
      echo "WARN: ENABLE_GITEA=true but GITEA_DB_NAME empty - skipping Gitea DB dump"
    fi
  fi
else
  echo "-- DB dumps disabled"
fi
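`mysqldump --defaults-extra-file` reads credentials from a MySQL-style client options file, which keeps passwords off the command line. A minimal sketch of such a file; the user name, password, and path here are placeholders, not values from this repo's config:

```shell
# Hypothetical credentials file for mysqldump/mysql; all values are placeholders.
cnf="$(mktemp)"
cat > "$cnf" <<'EOF'
[client]
user=wp_backup
password=CHANGE_ME
host=localhost
EOF
chmod 600 "$cnf"   # keep credentials readable by the owner only
# mysqldump --defaults-extra-file="$cnf" --single-transaction ... "$DB" > dump.sql
grep -c '^\[client\]' "$cnf"
```

Because the file is passed by path, rotating the password only touches this one root-owned file, not the cron job or the script.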
# ---------- File copies ----------
echo "-- Collecting files via rsync..."

rsync_dir() {
  local src="$1"
  local dst="$2"
  shift 2 || true
  [[ -d "$src" ]] || die "Source directory missing: $src"
  mkdir -p "$dst"
  # Remaining args are exclude patterns like "nextcloud/"
  local excludes=()
  while [[ $# -gt 0 ]]; do excludes+=("--exclude=$1"); shift; done
  rsync -aHAX --numeric-ids --delete --info=stats2 "${excludes[@]}" "$src"/ "$dst"/
}
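To see what the trailing arguments of `rsync_dir` turn into, here is the same accumulation loop in isolation, printing the resulting command line instead of invoking rsync (illustration only, `show_rsync_cmd` is not part of the script):

```shell
# Same exclude-accumulation pattern as rsync_dir, but printing instead of syncing.
show_rsync_cmd() {
  local src="$1" dst="$2"; shift 2
  local excludes=()
  while [[ $# -gt 0 ]]; do excludes+=("--exclude=$1"); shift; done
  echo "rsync -aHAX --numeric-ids --delete ${excludes[*]} ${src}/ ${dst}/"
}
show_rsync_cmd /var/www/html /staging/wordpress "nextcloud/"
```

Each positional argument becomes exactly one `--exclude=<pattern>` flag, so callers can pass any number of patterns without quoting tricks.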
# WordPress webroot: exclude nextcloud/ if it lives below WP_DIR
if [[ "${ENABLE_WORDPRESS:-false}" == "true" ]]; then
  echo "-- WordPress files: ${WP_DIR}"
  wp_excludes=()
  if [[ "${ENABLE_NEXTCLOUD:-false}" == "true" ]]; then
    wp="${WP_DIR%/}"; nc="${NC_DIR%/}"
    if [[ "$nc" == "$wp/nextcloud" ]]; then
      wp_excludes+=("nextcloud/")
    fi
  fi
  if [[ "${#wp_excludes[@]}" -gt 0 ]]; then
    echo "-- WordPress excludes: ${wp_excludes[*]}"
    rsync_dir "${WP_DIR}" "$STAGING_DIR/files/wordpress" "${wp_excludes[@]}"
  else
    rsync_dir "${WP_DIR}" "$STAGING_DIR/files/wordpress"
  fi
fi
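The auto-exclude only fires when the Nextcloud directory is literally `WP_DIR/nextcloud`; a Nextcloud install elsewhere is left alone and backed up in full by its own section. The comparison can be exercised on its own (helper name is illustrative):

```shell
# Decision helper matching the inline check above: trailing slashes are
# stripped, then the paths are compared literally.
wp_needs_nc_exclude() {
  local wp="${1%/}" nc="${2%/}"
  [[ "$nc" == "$wp/nextcloud" ]]
}
wp_needs_nc_exclude /var/www/html/ /var/www/html/nextcloud && echo "exclude"
wp_needs_nc_exclude /var/www/html /srv/nextcloud || echo "keep"
```

Note that this is a string comparison, not a path resolution: a symlinked or bind-mounted Nextcloud below the webroot would not match.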
# Nextcloud code: exclude data/
if [[ "${ENABLE_NEXTCLOUD:-false}" == "true" ]]; then
  echo "-- Nextcloud code: ${NC_DIR} (excluding data/)"
  rsync_dir "${NC_DIR}" "$STAGING_DIR/files/nextcloud" "data/"

  : "${NC_DATA_DIR:=${NC_DIR%/}/data}"
  if [[ "${ENABLE_NEXTCLOUD_DATA:-true}" == "true" ]]; then
    echo "-- Nextcloud data: ${NC_DATA_DIR}"
    rsync_dir "${NC_DATA_DIR}" "$STAGING_DIR/files/nextcloud-data"
  fi
fi

# Gitea files (based on app.ini APP_DATA_PATH)
if [[ "${ENABLE_GITEA:-false}" == "true" ]]; then
  : "${GITEA_DATA_DIR:=/var/lib/gitea/data}"
  echo "-- Gitea data dir: ${GITEA_DATA_DIR}"
  rsync_dir "${GITEA_DATA_DIR}" "$STAGING_DIR/files/gitea-data"

  : "${GITEA_ETC_DIR:=/etc/gitea}"
  if [[ -n "${GITEA_ETC_DIR}" && -d "${GITEA_ETC_DIR}" ]]; then
    echo "-- Gitea config dir: ${GITEA_ETC_DIR}"
    rsync_dir "${GITEA_ETC_DIR}" "$STAGING_DIR/files/gitea-etc"
  fi
fi
# ---------- Size summary ----------
SIZES_FILE="${STAGING_DIR}/meta/sizes.txt"
{
  echo "DB dumps staged:       $(human_bytes "$(bytes_of_path "$STAGING_DIR/db")")"
  echo "WordPress staged:      $(human_bytes "$(bytes_of_path "$STAGING_DIR/files/wordpress")")"
  echo "Nextcloud code staged: $(human_bytes "$(bytes_of_path "$STAGING_DIR/files/nextcloud")")"
  echo "Nextcloud data staged: $(human_bytes "$(bytes_of_path "$STAGING_DIR/files/nextcloud-data")")"
  echo "Gitea data staged:     $(human_bytes "$(bytes_of_path "$STAGING_DIR/files/gitea-data")")"
  echo "Gitea etc staged:      $(human_bytes "$(bytes_of_path "$STAGING_DIR/files/gitea-etc")")"
  echo "Staging total:         $(human_bytes "$(bytes_of_path "$STAGING_DIR")")"
} > "$SIZES_FILE" || true
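`human_bytes` and `bytes_of_path` are helpers defined earlier in the script and not visible in this diff. A minimal sketch of what such helpers could look like using GNU coreutils `numfmt` and `du` (assumed shapes, not the repo's actual definitions):

```shell
# Assumed shape of the two size helpers (illustration only).
human_bytes()   { numfmt --to=iec "${1:-0}"; }
bytes_of_path() { [[ -e "$1" ]] && du -sb "$1" | awk '{print $1}' || echo 0; }

human_bytes "$(bytes_of_path /etc/hostname)"
```

Returning `0` for a missing path matters here: components that are disabled (e.g. Gitea) simply show up as a zero-size line instead of aborting the summary.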
# ---------- Disk safety: check again after staging ----------
ensure_min_free_space
# ---------- Create separate archives ----------
make_archive() {
  local label="$1" src_rel="$2"
  local tar_file="${ARCHIVE_DIR}/${ARCHIVE_PREFIX}_${ts}_${label}.tar"
  local out_file
  # Progress goes to stderr: the caller captures stdout via "$(...)", so
  # stdout must carry nothing but the final archive path.
  echo "-- Creating archive (${label}): ${tar_file}" >&2
  (
    cd "$STAGING_DIR"
    tar --numeric-owner --xattrs --acls -cf "$tar_file" "$src_rel"
  )
  if [[ "$COMPRESSOR" == "zstd" ]]; then
    out_file="${tar_file}.zst"
    echo "-- Compressing (zstd): ${out_file}" >&2
    ionice -c "${IONICE_CLASS}" -n "${IONICE_LEVEL}" nice -n "${NICE_LEVEL}" zstd -T0 -19 --rm "$tar_file"
    zstd -t "$out_file"
  else
    out_file="${tar_file}.gz"
    echo "-- Compressing (gzip): ${out_file}" >&2
    ionice -c "${IONICE_CLASS}" -n "${IONICE_LEVEL}" nice -n "${NICE_LEVEL}" gzip -9 "$tar_file"
    gzip -t "$out_file"
  fi
  echo "$out_file"
}

ARCHIVES=()
ARCHIVES+=("$(make_archive "meta" "meta")")

if [[ -d "$STAGING_DIR/db" && -n "$(ls -A "$STAGING_DIR/db" 2>/dev/null || true)" ]]; then
  ARCHIVES+=("$(make_archive "db" "db")")
fi

[[ "${ENABLE_WORDPRESS:-false}" == "true" ]] && ARCHIVES+=("$(make_archive "wordpress" "files/wordpress")") || true

if [[ "${ENABLE_NEXTCLOUD:-false}" == "true" ]]; then
  ARCHIVES+=("$(make_archive "nextcloud" "files/nextcloud")")
  [[ "${ENABLE_NEXTCLOUD_DATA:-true}" == "true" ]] && ARCHIVES+=("$(make_archive "nextcloud-data" "files/nextcloud-data")") || true
fi

if [[ "${ENABLE_GITEA:-false}" == "true" ]]; then
  ARCHIVES+=("$(make_archive "gitea" "files/gitea-data")")
  if [[ -d "$STAGING_DIR/files/gitea-etc" && -n "$(ls -A "$STAGING_DIR/files/gitea-etc" 2>/dev/null || true)" ]]; then
    ARCHIVES+=("$(make_archive "gitea-etc" "files/gitea-etc")")
  fi
fi

echo "-- Archives created:"
for f in "${ARCHIVES[@]}"; do
  echo "   - $f ($(du -h "$f" | awk '{print $1}'))"
done
# restart gitea before upload
gitea_service_start
# ---------- Upload via rclone ----------
if [[ "${ENABLE_UPLOAD}" == "true" ]]; then
  RCLONE_OUTPUT_FILE="${LOG_DIR}/rclone_${ts}.log"
  RCLONE_STATUS="RUNNING"
  remote_run="${RCLONE_REMOTE_BASE}/${ARCHIVE_PREFIX}_${ts}"

  echo "-- rclone remote check: ${RCLONE_REMOTE_BASE}"
  "$RCLONE_BIN" lsf "${RCLONE_REMOTE_BASE}" --max-depth 1 >/dev/null 2>&1 || die "Remote not reachable: ${RCLONE_REMOTE_BASE}"

  echo "-- Creating remote folder: ${remote_run}"
  "$RCLONE_BIN" mkdir "${remote_run}" >/dev/null 2>&1 || true

  common_args=(
    "--checksum"
    "--retries" "${RCLONE_RETRIES}"
    "--low-level-retries" "${RCLONE_LOW_LEVEL_RETRIES}"
@@ -485,30 +408,34 @@ RCLONE_ARGS=(
    "--stats" "${RCLONE_STATS}"
    "--stats-one-line"
    "--log-level" "INFO"
    "--transfers" "${RCLONE_TRANSFERS}"
    "--checkers" "${RCLONE_CHECKERS}"
    "--timeout" "${RCLONE_TIMEOUT}"
    "--contimeout" "${RCLONE_CONTIMEOUT}"
    "--onedrive-chunk-size" "${RCLONE_ONEDRIVE_CHUNK_SIZE}"
  )
  [[ "${RCLONE_BWLIMIT}" != "0" ]] && common_args+=("--bwlimit" "${RCLONE_BWLIMIT}") || true

  echo "-- Uploading archives to: ${remote_run} (log: ${RCLONE_OUTPUT_FILE})"
  for f in "${ARCHIVES[@]}"; do
    echo "-- Upload: $(basename "$f")"
    if ! ionice -c "${IONICE_CLASS}" -n "${IONICE_LEVEL}" nice -n "${NICE_LEVEL}" \
        "$RCLONE_BIN" copy "$f" "${remote_run}" "${common_args[@]}" | tee -a "$RCLONE_OUTPUT_FILE"
    then
      RCLONE_STATUS="FAIL"
      die "Upload failed for $(basename "$f") (see ${RCLONE_OUTPUT_FILE})"
    fi
  done
  RCLONE_STATUS="OK"

  # ---------- Remote retention ----------
  if [[ "${ENABLE_REMOTE_RETENTION}" == "true" ]]; then
    echo "-- Remote retention: delete objects older than ${REMOTE_RETENTION_DAYS}d (best effort)"
    "$RCLONE_BIN" delete "${RCLONE_REMOTE_BASE}" --min-age "${REMOTE_RETENTION_DAYS}d" --log-level INFO || true
  fi
else
  echo "-- Upload disabled (ENABLE_UPLOAD=false)"
fi
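The `--bwlimit` pair is only appended when a limit is configured, so rclone never receives a literal `--bwlimit 0`. The conditional-append pattern on its own, with the array printed instead of passed to rclone:

```shell
# Conditional flag appending: the flag pair is added only for a non-zero limit.
RCLONE_BWLIMIT="10M"
common_args=("--checksum" "--retries" "3")
[[ "${RCLONE_BWLIMIT}" != "0" ]] && common_args+=("--bwlimit" "${RCLONE_BWLIMIT}")
printf '%s\n' "${common_args[@]}"
```

Keeping the flags in an array (rather than a string) preserves word boundaries, so values containing spaces would still arrive at rclone as single arguments.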
# ---------- Local retention (again, to enforce after new archive) ----------
cleanup_old_local_archives

echo "== app-backup done: ${ts} =="

View File

@@ -2,14 +2,18 @@
set -Eeuo pipefail
umask 027

# ==============================================================================
# app-restore.sh
# - Restore from a "run folder" (local dir or rclone remote folder)
# - Applies archives per component (meta/db/wordpress/nextcloud/nextcloud-data/gitea...)
# ==============================================================================

LOG_DIR="/var/log/app-backup"
mkdir -p "$LOG_DIR"
ts="$(date '+%Y-%m-%d_%H-%M-%S')"
LOG_FILE="${LOG_DIR}/app-restore_${ts}.log"
exec > >(tee -a "$LOG_FILE" | systemd-cat -t app-restore -p info) 2>&1
CONFIG_FILE="/etc/app-backup/app-backup.conf"
if [[ -r "$CONFIG_FILE" ]]; then
  # shellcheck disable=SC1090
@@ -19,33 +23,35 @@ else
  exit 2
fi
: "${WORKDIR:=/var/backups/app-backup}"
: "${RESTORE_ROOT:=${WORKDIR}/restore}"
: "${ARCHIVE_PREFIX:=appbackup}"

: "${RCLONE_BIN:=rclone}"
: "${RCLONE_REMOTE_BASE:=OneDrive:Sicherung/JRITServerBackups/$(hostname -s)}"

: "${DRY_RUN:=false}"
: "${RESTORE_DB:=true}"
: "${RESTORE_FILES:=true}"
: "${RESTORE_STRICT_DELETE:=false}"

: "${ENABLE_NEXTCLOUD_MAINTENANCE:=true}"
: "${NC_OCC_USER:=apache}"
: "${NC_FILES_SCAN_AFTER_RESTORE:=false}"

: "${ENABLE_GITEA_SERVICE_STOP:=true}"
: "${GITEA_SERVICE_NAME:=gitea}"
: "${ENABLE_HTTPD_STOP:=false}"
: "${HTTPD_SERVICE_NAME:=httpd}"
: "${ENABLE_PHPFPM_STOP:=false}"
: "${PHPFPM_SERVICE_NAME:=php-fpm}"
die() { echo "ERROR: $*"; exit 1; }
have() { command -v "$1" >/dev/null 2>&1; }
run_cmd() { [[ "${DRY_RUN}" == "true" ]] && echo "[DRY_RUN] $*" || "$@"; }
NC_MAINTENANCE_ON=false
nc_maintenance_off() {
  if [[ "${NC_MAINTENANCE_ON}" == "true" ]]; then
@@ -55,215 +61,176 @@ nc_maintenance_off() {
  fi
}

GITEA_WAS_STOPPED=false
HTTPD_WAS_STOPPED=false
PHPFPM_WAS_STOPPED=false
# Trailing "|| true" keeps these no-ops from returning 1 (and tripping set -e
# inside the EXIT trap) when the service was never stopped.
gitea_start()  { [[ "${GITEA_WAS_STOPPED}" == "true" ]]  && { echo "-- Starting gitea (trap)";  run_cmd systemctl start "${GITEA_SERVICE_NAME}"  || true; GITEA_WAS_STOPPED=false; } || true; }
httpd_start()  { [[ "${HTTPD_WAS_STOPPED}" == "true" ]]  && { echo "-- Starting httpd (trap)";  run_cmd systemctl start "${HTTPD_SERVICE_NAME}"  || true; HTTPD_WAS_STOPPED=false; } || true; }
phpfpm_start() { [[ "${PHPFPM_WAS_STOPPED}" == "true" ]] && { echo "-- Starting php-fpm (trap)"; run_cmd systemctl start "${PHPFPM_SERVICE_NAME}" || true; PHPFPM_WAS_STOPPED=false; } || true; }

on_exit() { local ec=$?; nc_maintenance_off; gitea_start; httpd_start; phpfpm_start; exit "${ec}"; }
trap on_exit EXIT
[[ $EUID -eq 0 ]] || die "Must run as root."
for t in tar rsync flock df find stat; do have "$t" || die "Missing required tool: $t"; done
mkdir -p "$WORKDIR" "$RESTORE_ROOT" "$LOG_DIR"

LOCKFILE="/run/app-backup.lock"
exec 9>"$LOCKFILE"
flock -n 9 || die "Another backup/restore already running (lock: $LOCKFILE)"
usage() {
  cat <<EOF
Usage:
  $0 --remote-run <run_folder_name>   # e.g. ${ARCHIVE_PREFIX}_2026-02-11_02-31-28
  $0 --local-run <path_to_run_dir>    # directory containing the split archives

Options:
  --dry-run
  --no-db
  --no-files
EOF
}

REMOTE_RUN=""
LOCAL_RUN=""
while [[ $# -gt 0 ]]; do
  case "$1" in
    --remote-run) REMOTE_RUN="${2:-}"; shift 2;;
    --local-run)  LOCAL_RUN="${2:-}";  shift 2;;
    --dry-run)    DRY_RUN=true; shift;;
    --no-db)      RESTORE_DB=false; shift;;
    --no-files)   RESTORE_FILES=false; shift;;
    -h|--help)    usage; exit 0;;
    *) die "Unknown arg: $1";;
  esac
done
[[ -z "${REMOTE_RUN}" && -z "${LOCAL_RUN}" ]] && { usage; exit 2; }
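A runnable excerpt of this argument loop, with the `die`/`usage` handlers reduced to plain assignments so it can be exercised standalone:

```shell
# Stand-alone version of the restore script's argument loop (illustration).
parse_args() {
  REMOTE_RUN=""; LOCAL_RUN=""; DRY_RUN=false
  while [[ $# -gt 0 ]]; do
    case "$1" in
      --remote-run) REMOTE_RUN="${2:-}"; shift 2;;
      --local-run)  LOCAL_RUN="${2:-}";  shift 2;;
      --dry-run)    DRY_RUN=true; shift;;
      *) echo "Unknown arg: $1" >&2; return 1;;
    esac
  done
}
parse_args --remote-run appbackup_2026-02-11_02-31-28 --dry-run
echo "run=${REMOTE_RUN} dry=${DRY_RUN}"
```

Options taking a value consume two positions (`shift 2`), flags consume one; the `*)` arm rejects anything unrecognized instead of silently ignoring it.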
RUN_DIR="${RESTORE_ROOT}/run_${ts}"
DOWNLOAD_DIR="${RUN_DIR}/downloads"
EXTRACT_DIR="${RUN_DIR}/extract"
mkdir -p "$DOWNLOAD_DIR" "$EXTRACT_DIR"

if [[ -n "${REMOTE_RUN}" ]]; then
  have "$RCLONE_BIN" || die "rclone missing but --remote-run used"
  remote_path="${RCLONE_REMOTE_BASE}/${REMOTE_RUN}"
  echo "-- Fetching archives from remote: ${remote_path} -> ${DOWNLOAD_DIR}"
  run_cmd "$RCLONE_BIN" copy "${remote_path}" "${DOWNLOAD_DIR}" --checksum --log-level INFO
  SRC_DIR="${DOWNLOAD_DIR}"
else
  [[ -d "${LOCAL_RUN}" ]] || die "Local run dir not found: ${LOCAL_RUN}"
  SRC_DIR="${LOCAL_RUN}"
fi
echo "== app-restore start: ${ts} =="
echo "-- Source dir: ${SRC_DIR}"
echo "-- DRY_RUN: ${DRY_RUN}"
detect_tar_flags() { case "$1" in *.tar.zst) echo "--zstd" ;; *.tar.gz) echo "-z" ;; *) die "Unsupported archive: $1" ;; esac; }

extract_archive() {
  local f="$1" flags; flags="$(detect_tar_flags "$f")"
  echo "-- Extract: $(basename "$f") -> ${EXTRACT_DIR}"
  [[ "${DRY_RUN}" == "true" ]] && echo "[DRY_RUN] tar ${flags} -xf $f -C ${EXTRACT_DIR}" || tar ${flags} -xf "$f" -C "$EXTRACT_DIR"
}

pick_one() { ls -1 "${SRC_DIR}"/$1 2>/dev/null | sort | tail -n 1 || true; }
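`detect_tar_flags` maps the archive extension to the matching tar decompression flag. The same case statement, with `die` swapped for a plain `return 1` so it runs standalone:

```shell
# Extension-to-flag mapping used before extraction (die replaced by return 1).
detect_tar_flags() {
  case "$1" in
    *.tar.zst) echo "--zstd" ;;
    *.tar.gz)  echo "-z" ;;
    *)         return 1 ;;
  esac
}
detect_tar_flags "appbackup_2026-02-11_02-31-28_db.tar.zst"
```

Matching on the full `*.tar.zst` / `*.tar.gz` suffix (rather than just `*.zst`) also rejects stray files like half-finished `.tar` archives.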
# stop services (optional)
if [[ "${ENABLE_HTTPD_STOP}" == "true" ]] && systemctl is-active --quiet "${HTTPD_SERVICE_NAME}"; then
  echo "-- Stopping httpd for restore: ${HTTPD_SERVICE_NAME}"
  run_cmd systemctl stop "${HTTPD_SERVICE_NAME}"; HTTPD_WAS_STOPPED=true
fi
if [[ "${ENABLE_PHPFPM_STOP}" == "true" ]] && systemctl is-active --quiet "${PHPFPM_SERVICE_NAME}"; then
  echo "-- Stopping php-fpm for restore: ${PHPFPM_SERVICE_NAME}"
  run_cmd systemctl stop "${PHPFPM_SERVICE_NAME}"; PHPFPM_WAS_STOPPED=true
fi
if [[ "${ENABLE_GITEA:-false}" == "true" && "${ENABLE_GITEA_SERVICE_STOP}" == "true" ]] && systemctl is-active --quiet "${GITEA_SERVICE_NAME}"; then
  echo "-- Stopping gitea for restore: ${GITEA_SERVICE_NAME}"
  run_cmd systemctl stop "${GITEA_SERVICE_NAME}"; GITEA_WAS_STOPPED=true
fi
# nextcloud maintenance
if [[ "${ENABLE_NEXTCLOUD:-false}" == "true" && "${ENABLE_NEXTCLOUD_MAINTENANCE}" == "true" ]] \
   && [[ -d "${NC_DIR}" && -f "${NC_DIR}/occ" ]]; then
  echo "-- Nextcloud maintenance mode ON..."
  run_cmd sudo -u "${NC_OCC_USER}" php "${NC_DIR}/occ" maintenance:mode --on
  NC_MAINTENANCE_ON=true
fi
# extract archives
meta_arc="$(pick_one "${ARCHIVE_PREFIX}_*_meta.tar.*")"; [[ -n "$meta_arc" ]] && extract_archive "$meta_arc" || true
db_arc="$(pick_one "${ARCHIVE_PREFIX}_*_db.tar.*")"
wp_arc="$(pick_one "${ARCHIVE_PREFIX}_*_wordpress.tar.*")"
nc_arc="$(pick_one "${ARCHIVE_PREFIX}_*_nextcloud.tar.*")"
ncd_arc="$(pick_one "${ARCHIVE_PREFIX}_*_nextcloud-data.tar.*")"
g_arc="$(pick_one "${ARCHIVE_PREFIX}_*_gitea.tar.*")"
g_etc_arc="$(pick_one "${ARCHIVE_PREFIX}_*_gitea-etc.tar.*")"

[[ -n "$db_arc" ]] && extract_archive "$db_arc" || true
[[ -n "$wp_arc"    && "${RESTORE_FILES}" == "true" ]] && extract_archive "$wp_arc"    || true
[[ -n "$nc_arc"    && "${RESTORE_FILES}" == "true" ]] && extract_archive "$nc_arc"    || true
[[ -n "$ncd_arc"   && "${RESTORE_FILES}" == "true" ]] && extract_archive "$ncd_arc"   || true
[[ -n "$g_arc"     && "${RESTORE_FILES}" == "true" ]] && extract_archive "$g_arc"     || true
[[ -n "$g_etc_arc" && "${RESTORE_FILES}" == "true" ]] && extract_archive "$g_etc_arc" || true
rsync_restore_dir() {
  local src="$1" dst="$2"
  [[ -d "$src" ]] || die "Restore source missing: $src"
  mkdir -p "$dst"
  local del=(); [[ "${RESTORE_STRICT_DELETE}" == "true" ]] && del=(--delete)
  run_cmd rsync -aHAX --numeric-ids --info=stats2 "${del[@]}" "$src"/ "$dst"/
}
if [[ "${RESTORE_FILES}" == "true" ]]; then
  echo "-- Restoring files..."

  if [[ -d "${EXTRACT_DIR}/files/wordpress" && "${ENABLE_WORDPRESS:-false}" == "true" ]]; then
    echo "-- WordPress -> ${WP_DIR}"
    rsync_restore_dir "${EXTRACT_DIR}/files/wordpress" "${WP_DIR}"
  fi

  if [[ "${ENABLE_NEXTCLOUD:-false}" == "true" ]]; then
    [[ -d "${EXTRACT_DIR}/files/nextcloud" ]] && { echo "-- Nextcloud code -> ${NC_DIR}"; rsync_restore_dir "${EXTRACT_DIR}/files/nextcloud" "${NC_DIR}"; }
    : "${NC_DATA_DIR:=${NC_DIR%/}/data}"
    [[ -d "${EXTRACT_DIR}/files/nextcloud-data" && "${ENABLE_NEXTCLOUD_DATA:-true}" == "true" ]] && { echo "-- Nextcloud data -> ${NC_DATA_DIR}"; rsync_restore_dir "${EXTRACT_DIR}/files/nextcloud-data" "${NC_DATA_DIR}"; }
  fi

  if [[ "${ENABLE_GITEA:-false}" == "true" ]]; then
    : "${GITEA_DATA_DIR:=/var/lib/gitea/data}"
    [[ -d "${EXTRACT_DIR}/files/gitea-data" ]] && { echo "-- Gitea data -> ${GITEA_DATA_DIR}"; rsync_restore_dir "${EXTRACT_DIR}/files/gitea-data" "${GITEA_DATA_DIR}"; }
    : "${GITEA_ETC_DIR:=/etc/gitea}"
    [[ -d "${EXTRACT_DIR}/files/gitea-etc" && -n "${GITEA_ETC_DIR:-}" ]] && { echo "-- Gitea etc -> ${GITEA_ETC_DIR}"; rsync_restore_dir "${EXTRACT_DIR}/files/gitea-etc" "${GITEA_ETC_DIR}"; }
  fi
else
  echo "-- RESTORE_FILES=false (skipping)"
fi
mysql_restore_sql() {
  local cnf="$1" db="$2" sql="$3"
  [[ -r "$cnf" ]] || die "DB CNF not readable: $cnf"
  [[ -r "$sql" ]] || die "SQL not readable: $sql"
  have mysql || die "mysql client missing"
  echo "-- Import MySQL/MariaDB DB: ${db} from $(basename "$sql")"
  run_cmd mysql --defaults-extra-file="$cnf" "$db" < "$sql"
}

if [[ "${RESTORE_DB}" == "true" && -d "${EXTRACT_DIR}/db" ]]; then
  echo "-- Restoring databases..."
  wp_sql="$(ls -1 "${EXTRACT_DIR}/db"/wordpress_*.sql 2>/dev/null | sort | tail -n 1 || true)"
  nc_sql="$(ls -1 "${EXTRACT_DIR}/db"/nextcloud_*.sql 2>/dev/null | sort | tail -n 1 || true)"
  g_sql="$(ls -1 "${EXTRACT_DIR}/db"/gitea_*.sql 2>/dev/null | sort | tail -n 1 || true)"

  [[ -n "${WP_DB_NAME:-}" && -n "$wp_sql" ]] && mysql_restore_sql "${WP_DB_CNF}" "${WP_DB_NAME}" "$wp_sql" || echo "WARN: WP DB dump missing"
  [[ -n "${NC_DB_NAME:-}" && -n "$nc_sql" ]] && mysql_restore_sql "${NC_DB_CNF}" "${NC_DB_NAME}" "$nc_sql" || echo "WARN: NC DB dump missing"
  [[ "${ENABLE_GITEA:-false}" == "true" && -n "${GITEA_DB_NAME:-}" && -n "$g_sql" ]] && mysql_restore_sql "${GITEA_DB_CNF}" "${GITEA_DB_NAME}" "$g_sql" || true
else
  echo "-- RESTORE_DB=false or no db dump present (skipping)"
fi
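Because the dump filenames embed a `YYYY-mm-dd_HH-MM-SS` timestamp, lexicographic order equals chronological order, so `sort | tail -n 1` reliably selects the newest dump. Demonstrated on throwaway files:

```shell
# Timestamped names sort chronologically, so the last sorted entry is newest.
d="$(mktemp -d)"
touch "$d/wordpress_2026-02-10_02-30-00.sql" "$d/wordpress_2026-02-11_02-31-28.sql"
newest="$(ls -1 "$d"/wordpress_*.sql | sort | tail -n 1)"
basename "$newest"
```

This property holds only for zero-padded, big-endian date formats; names like `1-2-2026` would break the ordering.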
if [[ "${ENABLE_NEXTCLOUD:-false}" == "true" && -d "${NC_DIR}" && -f "${NC_DIR}/occ" ]]; then
  echo "-- Nextcloud post-restore: maintenance:repair"
  run_cmd sudo -u "${NC_OCC_USER}" php "${NC_DIR}/occ" maintenance:repair || true

  if [[ "${NC_FILES_SCAN_AFTER_RESTORE}" == "true" ]]; then
    echo "-- Nextcloud post-restore: files:scan --all"
    run_cmd sudo -u "${NC_OCC_USER}" php "${NC_DIR}/occ" files:scan --all || true
  fi

  if [[ "${ENABLE_NEXTCLOUD_MAINTENANCE}" == "true" ]]; then
@@ -273,6 +240,10 @@ if [[ "${ENABLE_NEXTCLOUD}" == "true" && -d "${NC_DIR}" && -f "${NC_DIR}/occ" ]]
  fi
fi

gitea_start
phpfpm_start
httpd_start

echo "== app-restore done: ${ts} =="
echo "-- Working dir: ${RUN_DIR}"
echo "-- Log: ${LOG_FILE}"