Compare commits


2 Commits

4 changed files with 753 additions and 596 deletions

README.md

@@ -1,113 +1,340 @@
# app-backup (Rocky Linux 10): split backups for WordPress, Nextcloud, Gitea + DBs (OneDrive/rclone)

This repo contains a backup and restore setup for typical self-hosted apps on Rocky Linux 10:

- **WordPress** (webroot)
- **Nextcloud** (code + data separated; in our layout, data lives under `.../nextcloud/data`)
- **Gitea** (native installation via systemd + MariaDB)
- **MariaDB dumps** (WordPress DB, Nextcloud DB, Gitea DB)
- Upload to **OneDrive** via **rclone**
New in this version: no more giant "all-in-one" archive, but cleanly separated archives per component. This makes uploads more stable (large files), restores more selective, and errors easier to isolate.

Required packages:

```bash
sudo dnf install -y rsync tar zstd gzip rclone mariadb postfix
```

---
## ✅ Goals / design decisions

### 1) Split instead of one "fat archive"

Each run produces several archives (zstd or gzip):

- `..._meta.tar.zst` meta (timestamp, hostname, size info)
- `..._db.tar.zst` DB dumps (SQL files)
- `..._wordpress.tar.zst` WordPress web data (without the Nextcloud subfolder)
- `..._nextcloud.tar.zst` Nextcloud **code/web** (with `data/` excluded)
- `..._nextcloud-data.tar.zst` Nextcloud **data**, separately
- `..._gitea.tar.zst` Gitea data (APP_DATA_PATH)
- `..._gitea-etc.tar.zst` `/etc/gitea` (Gitea config)

Advantages:

- More stable uploads at **12-18 GB** (not one monolith)
- Selective restore (e.g. only the DB, only Nextcloud data, ...)
- Easier troubleshooting (one broken archive does not break everything)
### 2) Nextcloud data below `NC_DIR`

In our setup Nextcloud is laid out as:

- `NC_DIR=/var/www/html/nextcloud` (code)
- `NC_DATA_DIR=/var/www/html/nextcloud/data` (data)

So that data is **not backed up twice**:

- When backing up `NC_DIR`, `data/` is **explicitly excluded** (`rsync --exclude data/`).
- `NC_DATA_DIR` is then backed up as its own package.
### 3) WordPress lives in the webroot `/var/www/html`, with Nextcloud as a subfolder

In our setup:

- WordPress is in `WP_DIR=/var/www/html`
- Nextcloud is in `/var/www/html/nextcloud`

So that Nextcloud does not end up inside the WP archive:

- The WP backup automatically excludes `nextcloud/` when `NC_DIR` is exactly `WP_DIR/nextcloud`.
### 4) rclone / OneDrive hardened for large files

The upload is made robust with:

- Chunking: `--onedrive-chunk-size 64M`
- Timeouts: `--timeout 1h`, `--contimeout 30s`
- Conservative concurrency: `--transfers 2`, `--checkers 4`
- More retries: `--retries 10`, `--low-level-retries 40`

Also note: **remote names are case-sensitive!**
Our remote is e.g. `OneDrive:` (not `onedrive:`).
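Put together, uploading a single archive looks roughly like this (a sketch: the archive path is illustrative, the `<timestamp>` placeholder stays a placeholder, and the flags are the rclone options listed above):

```bash
archive="/var/backups/app-backup/archives/appbackup_..._db.tar.zst"  # illustrative path
rclone copyto "$archive" \
  "OneDrive:Sicherung/JRITServerBackups/$(hostname -s)/appbackup_<timestamp>/$(basename "$archive")" \
  --onedrive-chunk-size 64M \
  --timeout 1h --contimeout 30s \
  --transfers 2 --checkers 4 \
  --retries 10 --low-level-retries 40
```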
---
## 📦 Files & paths

### Scripts

- `app-backup.sh` → create the backup, build the archives, optionally upload.
- `app-restore.sh` → restore from a local folder or a OneDrive run folder.
- `app-backup.conf` → configuration (paths/services/DB/credentials).

### Target paths (defaults)

- Workdir: `/var/backups/app-backup`
- Archives: `/var/backups/app-backup/archives`
- Logs: `/var/log/app-backup`

### OneDrive target (example)

Uploads go to:

```
OneDrive:Sicherung/JRITServerBackups/<hostname>/appbackup_<timestamp>/
```
---

## 🔧 Installation / setup

### 1) Install the scripts

Example:

```bash
sudo install -d /etc/app-backup /usr/local/sbin
sudo install -m 0755 app-backup.sh /usr/local/sbin/app-backup.sh
sudo install -m 0755 app-restore.sh /usr/local/sbin/app-restore.sh
sudo install -m 0640 app-backup.conf /etc/app-backup/app-backup.conf
```
### 2) Create the DB credential files

One `.cnf` per app (WordPress / Nextcloud / Gitea):

- `/etc/app-backup/db-wordpress.cnf`
- `/etc/app-backup/db-nextcloud.cnf`
- `/etc/app-backup/db-gitea.cnf`

Also adjust the main configuration in `/etc/app-backup/app-backup.conf`.

Contents:

```ini
[client]
user=backup
password=YOUR_PASSWORD
host=localhost
```

Permissions:

```bash
sudo chown root:root /etc/app-backup/db-*.cnf
sudo chmod 600 /etc/app-backup/db-*.cnf
```
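Before the first run, it is worth checking that a credentials file actually works; a quick read-only test (file and DB name here are the WordPress examples from above):

```bash
mysql --defaults-extra-file=/etc/app-backup/db-wordpress.cnf -e 'SELECT 1;' wordpress
```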
### 3) MariaDB backup user (minimal privileges)

Per database (e.g. gitea):

```sql
CREATE USER 'backup'@'localhost' IDENTIFIED BY 'YOUR_PASSWORD';
GRANT SELECT, SHOW VIEW, TRIGGER, EVENT, LOCK TABLES ON gitea.* TO 'backup'@'localhost';
FLUSH PRIVILEGES;
```

Optional: reuse the same user for wordpress/nextcloud, then `GRANT ... ON wordpress.*` etc. per database.
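If you reuse the single `backup` user, the additional grants are the same privilege list, once per database (DB names as configured in this setup):

```sql
GRANT SELECT, SHOW VIEW, TRIGGER, EVENT, LOCK TABLES ON wordpress.* TO 'backup'@'localhost';
GRANT SELECT, SHOW VIEW, TRIGGER, EVENT, LOCK TABLES ON nextcloud.* TO 'backup'@'localhost';
FLUSH PRIVILEGES;
```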
### 4) Check the rclone remote (case-sensitive!)

On the server, as the same user that will later run rclone (typically root or johannes):

```bash
rclone listremotes
rclone lsf "OneDrive:" --max-depth 1
rclone lsf "OneDrive:Sicherung" --max-depth 1
```

⚠️ Important: if the backup runs as `root`, then `root` also needs access to the rclone config.
There are typically two ways:

**Option A: root uses its own rclone config**

- Run `sudo rclone config` as root and create the remote there.

**Option B: root uses johannes' config**

- Set `RCLONE_CONFIG=/home/johannes/.config/rclone/rclone.conf` in the script/service,
- or set `Environment=RCLONE_CONFIG=...` in the systemd unit.
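For option B with systemd, a drop-in keeps the override separate from the unit file (the path assumes the `app-backup.service` example unit from this README):

```ini
# /etc/systemd/system/app-backup.service.d/rclone-config.conf
[Service]
Environment=RCLONE_CONFIG=/home/johannes/.config/rclone/rclone.conf
```

After adding it, run `sudo systemctl daemon-reload`.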
---
## ⚙️ Configuration: app-backup.conf (important settings)

### OneDrive remote base

```bash
RCLONE_REMOTE_BASE="OneDrive:Sicherung/JRITServerBackups/$(hostname -s)"
```

The script creates one subfolder per run:

```bash
remote_run="${RCLONE_REMOTE_BASE}/appbackup_<timestamp>"
```
### WordPress / Nextcloud layout (important!)

```bash
WP_DIR="/var/www/html"
NC_DIR="/var/www/html/nextcloud"
NC_DATA_DIR="/var/www/html/nextcloud/data"
```

This yields:

- WordPress = everything in `/var/www/html` **without** `nextcloud/`
- Nextcloud code = `/var/www/html/nextcloud` **without** `data/`
- Nextcloud data = `/var/www/html/nextcloud/data`, separately
### Gitea (native + MariaDB)

From your systemd unit / app.ini:

- `WorkingDirectory=/var/lib/gitea`
- `APP_DATA_PATH=/var/lib/gitea/data`
- config: `/etc/gitea/app.ini`

Therefore:

```bash
ENABLE_GITEA="true"
GITEA_SERVICE_NAME="gitea"
ENABLE_GITEA_SERVICE_STOP="true"
GITEA_DATA_DIR="/var/lib/gitea/data"
GITEA_ETC_DIR="/etc/gitea"
GITEA_DB_NAME="gitea"
GITEA_DB_CNF="/etc/app-backup/db-gitea.cnf"
```
---
## ▶️ Run a backup

```bash
sudo /usr/local/sbin/app-backup.sh
```

Expected behavior:

1) The lock `/run/app-backup.lock` prevents parallel runs
2) Gitea is optionally stopped briefly (consistent snapshot)
3) Dumps: wordpress / nextcloud / gitea
4) rsync into staging
5) Several `.tar.zst` archives are created
6) Per-file upload to OneDrive (if `ENABLE_UPLOAD=true`)
7) Remote retention (best effort)
8) Local retention deletes old archives
---
## ♻️ Run a restore

### Restore from OneDrive (pass the run folder name)

You need the folder name, e.g.:
`appbackup_2026-02-11_02-31-28`

Then:

```bash
sudo /usr/local/sbin/app-restore.sh --remote-run appbackup_2026-02-11_02-31-28
```

### Restore from a local folder

If you already have the archives locally:

```bash
sudo /usr/local/sbin/app-restore.sh --local-run /var/backups/app-backup/archives/<run-folder>
```

### Dry run

```bash
sudo /usr/local/sbin/app-restore.sh --remote-run appbackup_... --dry-run
```

### Files only / DB only

```bash
sudo /usr/local/sbin/app-restore.sh --remote-run appbackup_... --no-db
sudo /usr/local/sbin/app-restore.sh --remote-run appbackup_... --no-files
```
---
## 🧠 Restore details (what happens afterwards?)

### Nextcloud

- Maintenance mode is optionally enabled (configurable)
- The restore copies code and data back separately
- Afterwards:
  - `occ maintenance:repair`
  - optionally `occ files:scan --all` (disabled by default; can take a long time)

### Gitea

- The service is optionally stopped
- Data is synced back to `GITEA_DATA_DIR`
- `/etc/gitea` is restored
- The DB dump is imported
- The service is started again
---
## 🧯 Troubleshooting

### 1) rclone remote not reachable / "didn't find section in config file"

Typical causes:

- The remote is named `OneDrive:` but the config uses `onedrive:` (remote names are **case-sensitive**).
- The backup runs as `root`, but the rclone config only exists for `johannes`.

Check:

```bash
sudo rclone listremotes
sudo rclone lsd OneDrive:
sudo -u johannes rclone listremotes
```

Fix:

- Either configure the rclone remote for root as well,
- or point `RCLONE_CONFIG` at johannes' config.
### 2) Nextcloud data duplicated or missing

Check these settings:

- Is `NC_DIR` correct?
- Is `NC_DATA_DIR` correct?
- The script excludes `data/` from the Nextcloud code backup; data is backed up separately.

### 3) Large uploads keep aborting

Try e.g.:

- `RCLONE_TRANSFERS=1`
- `RCLONE_CHECKERS=2`
- `RCLONE_ONEDRIVE_CHUNK_SIZE=32M`
- optionally `RCLONE_BWLIMIT="20M"`
### 4) Restore should mirror "hard"

The default is cautious (no delete). If you really want the target directories mirrored exactly:

```bash
RESTORE_STRICT_DELETE="true"
```

⚠️ Warning: this can delete files that are not in the backup.
---
## 🗓️ Optional: systemd service + timer (example)

> Just an example; if you like, we can add these as files to the repo.
`/etc/systemd/system/app-backup.service`
```ini
[Unit]
Description=app-backup
After=network.target mariadb.service
Wants=mariadb.service
[Service]
Type=oneshot
ExecStart=/usr/local/sbin/app-backup.sh
```
`/etc/systemd/system/app-backup.timer`
```ini
[Unit]
Description=Run app-backup daily
[Timer]
OnCalendar=daily
Persistent=true
[Install]
WantedBy=timers.target
```
Enable:
```bash
sudo systemctl daemon-reload
sudo systemctl enable --now app-backup.timer
sudo systemctl list-timers | grep app-backup
```
Manual test run:
```bash
sudo systemctl start app-backup.service
journalctl -u app-backup.service -n 200 --no-pager
ls -la /var/log/app-backup/
ls -la /var/backups/app-backup/archives/
```
---
## Disk protection / retention (important)

To avoid a filesystem running full:

- **Local archives are kept for at most `LOCAL_RETENTION_DAYS` days** (default: **7 days**)
- In addition, before and after staging the script checks that at least **`MIN_FREE_GB` GiB** are free (default: **10 GiB**).
- Deleted backups are listed in the **mail report** (count + roughly freed space).

Settings in `/etc/app-backup/app-backup.conf`:

- `LOCAL_RETENTION_DAYS=7`
- `MIN_FREE_GB=10`
## Mail report

Sent via Postfix `/usr/sbin/sendmail` to `MAIL_TO` (default: `johannes`).

The report includes, among other things:

- Status (SUCCESS/FAIL), duration
- What was backed up + sizes
- Upload status (rclone)
- **How many old local backups were deleted (retention)**
## Check after a restore

```bash
systemctl status httpd postfix dovecot --no-pager
tail -n 200 /var/log/app-backup/app-restore_*.log
```

Optional (can take a while):

```bash
sudo -u apache php /var/www/html/nextcloud/occ maintenance:repair
```
## ✅ Recommendation: test your restores

Backups are only good once a restore has been tested:

- Test VM / test host
- Import the DBs
- Unpack the archives / rsync them back
- Nextcloud starts, login works, files visible
- WordPress frontend/backend OK
- Gitea web UI OK, repos present

app-backup.conf

@@ -1,51 +1,83 @@
# app-backup.conf
# -----------------------------------------------------------------------------
# Configuration for app-backup.sh and app-restore.sh (split archives + tuned rclone)
# -----------------------------------------------------------------------------

# Working dirs
WORKDIR="/var/backups/app-backup"
STAGING_ROOT="${WORKDIR}/staging"
ARCHIVE_DIR="${WORKDIR}/archives"
ARCHIVE_PREFIX="appbackup"

# Local retention / compression
LOCAL_RETENTION_DAYS=7
COMPRESSOR="zstd"              # zstd|gzip

# rclone / OneDrive (remote name is case-sensitive!)
RCLONE_BIN="rclone"
RCLONE_REMOTE_BASE="OneDrive:Sicherung/JRITServerBackups/$(hostname -s)"
ENABLE_UPLOAD="true"

# Large file robustness
RCLONE_ONEDRIVE_CHUNK_SIZE="64M"
RCLONE_TIMEOUT="1h"
RCLONE_CONTIMEOUT="30s"
RCLONE_TRANSFERS="2"
RCLONE_CHECKERS="4"

# Retry strategy
RCLONE_RETRIES=10
RCLONE_LOW_LEVEL_RETRIES=40
RCLONE_RETRIES_SLEEP="30s"
RCLONE_STATS="1m"
RCLONE_BWLIMIT="0"             # e.g. "20M"

# Remote retention
ENABLE_REMOTE_RETENTION="true"
REMOTE_RETENTION_DAYS=30

# Disk protection / niceness
MIN_FREE_GB=12
NICE_LEVEL=10
IONICE_CLASS=2
IONICE_LEVEL=6

# Mail reporting via postfix/sendmail
ENABLE_MAIL_REPORT="true"
MAIL_TO="johannes"
MAIL_FROM="app-backup@$(hostname -f 2>/dev/null || hostname)"
MAIL_SUBJECT_PREFIX="[app-backup]"
MAIL_INCLUDE_LOG_TAIL_LINES=200

# Components
ENABLE_WORDPRESS="true"
ENABLE_NEXTCLOUD="true"
ENABLE_NEXTCLOUD_DATA="true"
ENABLE_NEXTCLOUD_MAINTENANCE="true"
ENABLE_DB_DUMPS="true"

# WordPress
WP_DIR="/var/www/html"
WP_DB_NAME="wordpress"
WP_DB_CNF="/etc/app-backup/db-wordpress.cnf"

# Nextcloud
NC_DIR="/var/www/html/nextcloud"
NC_OCC_USER="apache"
NC_DB_NAME="nextcloud"
NC_DB_CNF="/etc/app-backup/db-nextcloud.cnf"

# Nextcloud data UNDER NC_DIR
NC_DATA_DIR="/var/www/html/nextcloud/data"
NC_FILES_SCAN_AFTER_RESTORE="false"

# Gitea (native install)
ENABLE_GITEA="true"
GITEA_SERVICE_NAME="gitea"
ENABLE_GITEA_SERVICE_STOP="true"
# According to your /etc/gitea/app.ini:
#   APP_DATA_PATH = /var/lib/gitea/data
GITEA_DATA_DIR="/var/lib/gitea/data"
GITEA_ETC_DIR="/etc/gitea"

# Gitea MariaDB
GITEA_DB_NAME="gitea"
GITEA_DB_CNF="/etc/app-backup/db-gitea.cnf"

# Restore behavior
RESTORE_STRICT_DELETE="false"

app-backup.sh

@@ -2,13 +2,19 @@
set -Eeuo pipefail
umask 027
# ==============================================================================
# app-backup.sh
# - Separate archives per component (db / wordpress / nextcloud-code / nextcloud-data / gitea)
# - rclone tuned for large files (OneDrive chunking + timeouts + conservative concurrency)
# - Nextcloud "data" excluded from code backup (layout: ${NC_DIR}/data)
# - Gitea native install (systemd), data path configurable (default: /var/lib/gitea/data)
# ==============================================================================
# ---------- Logging ----------
LOG_DIR="/var/log/app-backup"
mkdir -p "$LOG_DIR"
ts="$(date '+%Y-%m-%d_%H-%M-%S')"
LOG_FILE="${LOG_DIR}/app-backup_${ts}.log"
# Log to file + journald
exec > >(tee -a "$LOG_FILE" | systemd-cat -t app-backup -p info) 2>&1
# ---------- Config ----------
@@ -28,113 +34,76 @@ fi
: "${LOCAL_RETENTION_DAYS:=7}"
: "${COMPRESSOR:=zstd}" # zstd|gzip
: "${ARCHIVE_PREFIX:=appbackup}" # file prefix
# rclone
: "${RCLONE_BIN:=rclone}"
: "${RCLONE_REMOTE_BASE:=OneDrive:Sicherung/JRITServerBackups/$(hostname -s)}" # remote folder
: "${RCLONE_RETRIES:=10}"
: "${RCLONE_LOW_LEVEL_RETRIES:=40}"
: "${RCLONE_RETRIES_SLEEP:=30s}"
: "${RCLONE_STATS:=1m}"
: "${RCLONE_BWLIMIT:=0}" # "0" = no limit
: "${ENABLE_UPLOAD:=true}"
# large-file stability (OneDrive)
: "${RCLONE_ONEDRIVE_CHUNK_SIZE:=64M}"
: "${RCLONE_TIMEOUT:=1h}"
: "${RCLONE_CONTIMEOUT:=30s}"
: "${RCLONE_TRANSFERS:=2}"
: "${RCLONE_CHECKERS:=4}"
: "${REMOTE_RETENTION_DAYS:=30}"
: "${ENABLE_REMOTE_RETENTION:=true}"
# Disk-space safety: minimum free space required on the filesystem holding WORKDIR
: "${MIN_FREE_GB:=12}"
# Process niceness
: "${NICE_LEVEL:=10}"
: "${IONICE_CLASS:=2}" # best-effort
: "${IONICE_LEVEL:=6}"
# Mail reporting (local postfix via sendmail)
: "${ENABLE_MAIL_REPORT:=true}"
: "${MAIL_TO:=johannes}" # local mailbox
: "${MAIL_FROM:=app-backup@$(hostname -f 2>/dev/null || hostname)}"
: "${MAIL_SUBJECT_PREFIX:=[app-backup]}"
: "${MAIL_INCLUDE_LOG_TAIL_LINES:=200}"
# OPTIONAL: allow wp excludes via config, e.g. WP_EXCLUDES=("nextcloud/" "foo/")
# If unset, we compute a safe default for your setup.
: "${WP_EXCLUDES_MODE:=auto}" # auto|manual
# ---------- State for report ----------
START_EPOCH="$(date +%s)"
STATUS="SUCCESS"
ERROR_SUMMARY=""
RCLONE_STATUS="SKIPPED"
RCLONE_OUTPUT_FILE=""
ARCHIVE_FILE=""
SIZES_FILE=""
DELETED_LOCAL_COUNT=0
DELETED_LOCAL_BYTES=0
DELETED_LOCAL_LIST_FILE=""
# ---------- Helpers ----------
die() { echo "ERROR: $*"; exit 1; }
have() { command -v "$1" >/dev/null 2>&1; }
human_bytes() { local b="${1:-0}"; if have numfmt; then numfmt --to=iec-i --suffix=B "$b"; else echo "${b}B"; fi; }
bytes_of_path() { local p="$1"; [[ -e "$p" ]] && (du -sb "$p" 2>/dev/null | awk '{print $1}' || du -sB1 "$p" | awk '{print $1}') || echo 0; }
free_bytes_workdir_fs() { df -PB1 "$WORKDIR" | awk 'NR==2{print $4}'; }
ensure_min_free_space() {
local min_bytes=$((MIN_FREE_GB * 1024 * 1024 * 1024))
local avail; avail="$(free_bytes_workdir_fs)"
echo "-- Free space on WORKDIR filesystem: $(human_bytes "$avail") (min required: ${MIN_FREE_GB}GiB)"
[[ "$avail" -ge "$min_bytes" ]] || die "Not enough free space on WORKDIR filesystem (need >= ${MIN_FREE_GB}GiB)."
}
cleanup_old_local_archives() {
  mkdir -p "$ARCHIVE_DIR"
  echo "-- Local retention: deleting archives older than ${LOCAL_RETENTION_DAYS} day(s) from ${ARCHIVE_DIR}"
  find "$ARCHIVE_DIR" -type f -name "${ARCHIVE_PREFIX}_*.tar.*" -mtime "+${LOCAL_RETENTION_DAYS}" -print -delete 2>/dev/null || true
}
send_report_mail() {
[[ "${ENABLE_MAIL_REPORT}" == "true" ]] || return 0
  local SENDMAIL_BIN="/usr/sbin/sendmail"
  [[ -x "$SENDMAIL_BIN" ]] || { echo "WARN: sendmail missing at $SENDMAIL_BIN"; return 0; }
local end_epoch now duration subject host
end_epoch="$(date +%s)"
@@ -143,18 +112,6 @@ send_report_mail() {
host="$(hostname -f 2>/dev/null || hostname)"
subject="${MAIL_SUBJECT_PREFIX} ${STATUS} ${host} ${ts}"
  {
    echo "From: ${MAIL_FROM}"
    echo "To: ${MAIL_TO}"
@@ -171,69 +128,30 @@
    [[ -n "${ERROR_SUMMARY}" ]] && echo "Fehler: ${ERROR_SUMMARY}"
    echo "Dauer: ${duration}s"
    echo
    echo "Remote"
    echo "------"
    echo "Upload: ${ENABLE_UPLOAD}"
    echo "Remote base: ${RCLONE_REMOTE_BASE}"
    echo "Upload Status: ${RCLONE_STATUS}"
    [[ -n "${RCLONE_OUTPUT_FILE}" && -f "${RCLONE_OUTPUT_FILE}" ]] && { echo; echo "rclone Tail:"; tail -n 60 "${RCLONE_OUTPUT_FILE}" || true; }
    echo
    [[ -n "${SIZES_FILE}" && -f "${SIZES_FILE}" ]] && cat "${SIZES_FILE}" || echo "(keine Größeninfos verfügbar)"
    echo
    echo "Log-Auszug (Tail)"
    echo "-----------------"
    tail -n "${MAIL_INCLUDE_LOG_TAIL_LINES}" "${LOG_FILE}" || true
  } | "$SENDMAIL_BIN" -t || echo "WARN: sending mail failed"
}
cleanup_staging() { [[ -n "${STAGING_DIR:-}" && -d "${STAGING_DIR:-}" ]] && rm -rf "${STAGING_DIR:?}"; }
# Nextcloud maintenance-mode safety trap
NC_MAINTENANCE_ON=false
@@ -245,21 +163,18 @@ nc_maintenance_off() {
fi
}
# Gitea service safety trap
GITEA_WAS_STOPPED=false
gitea_service_start() {
  if [[ "${GITEA_WAS_STOPPED}" == "true" ]]; then
    echo "-- Starting Gitea service (trap)..."
    systemctl start "${GITEA_SERVICE_NAME}" || true
    GITEA_WAS_STOPPED=false
  fi
}
on_error() { local ec=$?; STATUS="FAIL"; ERROR_SUMMARY="Exit code ${ec} (see log)"; return 0; }
on_exit() { local ec=$?; send_report_mail; nc_maintenance_off; gitea_service_start; cleanup_staging; exit "${ec}"; }
trap on_error ERR
trap on_exit EXIT
@@ -271,10 +186,9 @@ echo "-- Log: ${LOG_FILE}"
# ---------- Preconditions ----------
[[ $EUID -eq 0 ]] || die "Must run as root."
mkdir -p "$WORKDIR" "$ARCHIVE_DIR" "$STAGING_ROOT" "$LOG_DIR"
# ---------- Locking (prevents parallel runs) ----------
LOCKFILE="/run/app-backup.lock"
exec 9>"$LOCKFILE"
if ! flock -n 9; then
@@ -284,231 +198,244 @@ if ! flock -n 9; then
fi
# ---------- Tools ----------
for t in tar rsync flock df find stat; do have "$t" || die "Missing required tool: $t"; done
if [[ "$COMPRESSOR" == "zstd" ]]; then have zstd || die "COMPRESSOR=zstd but zstd is missing"
elif [[ "$COMPRESSOR" == "gzip" ]]; then have gzip || die "COMPRESSOR=gzip but gzip is missing"
else die "Unsupported COMPRESSOR=$COMPRESSOR (use zstd or gzip)"; fi
if [[ "${ENABLE_DB_DUMPS}" == "true" ]]; then
have mysqldump || die "ENABLE_DB_DUMPS=true but mysqldump missing"
have mysql || die "ENABLE_DB_DUMPS=true but mysql client missing"
fi
have "$RCLONE_BIN" || die "rclone not installed (missing: $RCLONE_BIN)"
# ---------- Disk safety ----------
cleanup_old_local_archives
ensure_min_free_space
# ---------- Staging ----------
STAGING_DIR="${STAGING_ROOT}/run_${ts}"
mkdir -p "$STAGING_DIR"/{db,files,meta}
echo "$(date -Is)" > "$STAGING_DIR/meta/created_at.txt"
echo "$(hostname -f 2>/dev/null || hostname)" > "$STAGING_DIR/meta/hostname.txt"
echo "${ts}" > "$STAGING_DIR/meta/timestamp.txt"
# ---------- Services consistency ----------
if [[ "${ENABLE_GITEA:-false}" == "true" && "${ENABLE_GITEA_SERVICE_STOP:-true}" == "true" ]]; then
if systemctl is-active --quiet "${GITEA_SERVICE_NAME}"; then
echo "-- Stopping Gitea service for consistent backup: ${GITEA_SERVICE_NAME}"
systemctl stop "${GITEA_SERVICE_NAME}"
GITEA_WAS_STOPPED=true
fi
fi
# ---------- DB Dumps ----------
if [[ "${ENABLE_DB_DUMPS}" == "true" ]]; then
  echo "-- DB dumps enabled"
  dump_mysql_db() {
    local cnf="$1" db="$2" out="$3"
    [[ -r "$cnf" ]] || die "DB CNF not readable: $cnf"
    echo "-- Dump MySQL/MariaDB DB: ${db}"
    mysqldump --defaults-extra-file="$cnf" --single-transaction --routines --triggers --hex-blob "$db" > "$out"
  }
  [[ -n "${WP_DB_NAME:-}" ]] && dump_mysql_db "${WP_DB_CNF}" "${WP_DB_NAME}" "$STAGING_DIR/db/wordpress_${ts}.sql" || true
  if [[ -n "${NC_DB_NAME:-}" ]]; then
    if [[ "${ENABLE_NEXTCLOUD_MAINTENANCE:-true}" == "true" ]]; then
      echo "-- Nextcloud maintenance mode ON..."
      sudo -u "${NC_OCC_USER}" php "${NC_DIR}/occ" maintenance:mode --on
      NC_MAINTENANCE_ON=true
    fi
    dump_mysql_db "${NC_DB_CNF}" "${NC_DB_NAME}" "$STAGING_DIR/db/nextcloud_${ts}.sql"
    if [[ "${ENABLE_NEXTCLOUD_MAINTENANCE:-true}" == "true" ]]; then
      echo "-- Nextcloud maintenance mode OFF..."
      sudo -u "${NC_OCC_USER}" php "${NC_DIR}/occ" maintenance:mode --off || true
      NC_MAINTENANCE_ON=false
    fi
  fi
  if [[ "${ENABLE_GITEA:-false}" == "true" ]]; then
    # native gitea with MariaDB
    [[ -n "${GITEA_DB_NAME:-}" ]] && dump_mysql_db "${GITEA_DB_CNF}" "${GITEA_DB_NAME}" "$STAGING_DIR/db/gitea_${ts}.sql" || echo "WARN: ENABLE_GITEA=true but GITEA_DB_NAME empty - skipping gitea DB"
  fi
else
  echo "-- DB dumps disabled"
fi
# ---------- File copies (rsync into staging for consistency) ----------
echo "-- Collecting files via rsync..."
rsync_dir() {
  local src="$1"
  local dst="$2"
  shift 2 || true
  [[ -d "$src" ]] || die "Source directory missing: $src"
  mkdir -p "$dst"
  # Remaining args are exclude patterns like "nextcloud/"
  local excludes=()
  while [[ $# -gt 0 ]]; do excludes+=("--exclude=$1"); shift; done
  rsync -aHAX --numeric-ids --delete --info=stats2 "${excludes[@]}" "$src"/ "$dst"/
}
compute_wp_excludes() {
# Returns exclude patterns via stdout, one per line
if [[ "${WP_EXCLUDES_MODE}" == "manual" ]]; then
if declare -p WP_EXCLUDES >/dev/null 2>&1; then
for e in "${WP_EXCLUDES[@]}"; do
echo "$e"
done
fi
return 0
fi
# auto mode:
# If Nextcloud is enabled and sits inside WP_DIR (your setup), exclude "nextcloud/" from WP sync
if [[ "${ENABLE_NEXTCLOUD}" == "true" ]]; then
local wp="${WP_DIR%/}"
local nc="${NC_DIR%/}"
if [[ "$nc" == "$wp/nextcloud" ]]; then
echo "nextcloud/"
fi
fi
}
if [[ "${ENABLE_WORDPRESS}" == "true" ]]; then
# WordPress webroot: exclude nextcloud/ if it lives below WP_DIR
if [[ "${ENABLE_WORDPRESS:-false}" == "true" ]]; then
echo "-- WordPress files: ${WP_DIR}"
mapfile -t _wp_excludes < <(compute_wp_excludes || true)
if [[ "${#_wp_excludes[@]}" -gt 0 ]]; then
echo "-- WordPress excludes: ${_wp_excludes[*]}"
rsync_dir "${WP_DIR}" "$STAGING_DIR/files/wordpress" "${_wp_excludes[@]}"
wp_excludes=()
if [[ "${ENABLE_NEXTCLOUD:-false}" == "true" ]]; then
wp="${WP_DIR%/}"; nc="${NC_DIR%/}"
if [[ "$nc" == "$wp/nextcloud" ]]; then
wp_excludes+=("nextcloud/")
fi
fi
if [[ "${#wp_excludes[@]}" -gt 0 ]]; then
echo "-- WordPress excludes: ${wp_excludes[*]}"
rsync_dir "${WP_DIR}" "$STAGING_DIR/files/wordpress" "${wp_excludes[@]}"
else
rsync_dir "${WP_DIR}" "$STAGING_DIR/files/wordpress"
fi
fi
# Nextcloud code: exclude data/
if [[ "${ENABLE_NEXTCLOUD:-false}" == "true" ]]; then
echo "-- Nextcloud code: ${NC_DIR} (excluding data/)"
rsync_dir "${NC_DIR}" "$STAGING_DIR/files/nextcloud" "data/"
: "${NC_DATA_DIR:=${NC_DIR%/}/data}"
if [[ "${ENABLE_NEXTCLOUD_DATA:-true}" == "true" ]]; then
echo "-- Nextcloud data: ${NC_DATA_DIR}"
rsync_dir "${NC_DATA_DIR}" "$STAGING_DIR/files/nextcloud-data"
fi
fi
# Gitea files (based on app.ini APP_DATA_PATH)
if [[ "${ENABLE_GITEA:-false}" == "true" ]]; then
: "${GITEA_DATA_DIR:=/var/lib/gitea/data}"
echo "-- Gitea data dir: ${GITEA_DATA_DIR}"
rsync_dir "${GITEA_DATA_DIR}" "$STAGING_DIR/files/gitea-data"
: "${GITEA_ETC_DIR:=/etc/gitea}"
if [[ -n "${GITEA_ETC_DIR}" && -d "${GITEA_ETC_DIR}" ]]; then
echo "-- Gitea config dir: ${GITEA_ETC_DIR}"
rsync_dir "${GITEA_ETC_DIR}" "$STAGING_DIR/files/gitea-etc"
fi
fi
# ---------- Size summary ----------
SIZES_FILE="${STAGING_DIR}/meta/sizes.txt"
{
echo "DB dumps staged: $(human_bytes "$(bytes_of_path "$STAGING_DIR/db")")"
echo "WordPress staged: $(human_bytes "$(bytes_of_path "$STAGING_DIR/files/wordpress")")"
echo "Nextcloud code staged: $(human_bytes "$(bytes_of_path "$STAGING_DIR/files/nextcloud")")"
echo "Nextcloud data staged: $(human_bytes "$(bytes_of_path "$STAGING_DIR/files/nextcloud-data")")"
echo "Gitea data staged: $(human_bytes "$(bytes_of_path "$STAGING_DIR/files/gitea-data")")"
echo "Gitea etc staged: $(human_bytes "$(bytes_of_path "$STAGING_DIR/files/gitea-etc")")"
echo "Staging total: $(human_bytes "$(bytes_of_path "$STAGING_DIR")")"
} > "$SIZES_FILE" || true
# ---------- Disk safety: check again after staging ----------
ensure_min_free_space
# ---------- Create separate archives ----------
make_archive() {
local label="$1" src_rel="$2"
local tar_file="${ARCHIVE_DIR}/${ARCHIVE_PREFIX}_${ts}_${label}.tar"
local out_file
# Progress goes to stderr: stdout is captured by ARCHIVES+=("$(make_archive ...)")
echo "-- Creating archive (${label}): ${tar_file}" >&2
(
cd "$STAGING_DIR"
tar --numeric-owner --xattrs --acls -cf "$tar_file" "$src_rel"
)
if [[ "$COMPRESSOR" == "zstd" ]]; then
out_file="${tar_file}.zst"
echo "-- Compressing (zstd): ${out_file}" >&2
ionice -c "${IONICE_CLASS}" -n "${IONICE_LEVEL}" nice -n "${NICE_LEVEL}" zstd -T0 -19 --rm "$tar_file"
zstd -t "$out_file"
else
out_file="${tar_file}.gz"
echo "-- Compressing (gzip): ${out_file}" >&2
ionice -c "${IONICE_CLASS}" -n "${IONICE_LEVEL}" nice -n "${NICE_LEVEL}" gzip -9 "$tar_file"
gzip -t "$out_file"
fi
echo "$out_file"
}
ARCHIVES=()
ARCHIVES+=("$(make_archive "meta" "meta")")
if [[ -d "$STAGING_DIR/db" && -n "$(ls -A "$STAGING_DIR/db" 2>/dev/null || true)" ]]; then
ARCHIVES+=("$(make_archive "db" "db")")
fi
[[ "${ENABLE_WORDPRESS:-false}" == "true" ]] && ARCHIVES+=("$(make_archive "wordpress" "files/wordpress")") || true
if [[ "${ENABLE_NEXTCLOUD:-false}" == "true" ]]; then
ARCHIVES+=("$(make_archive "nextcloud" "files/nextcloud")")
[[ "${ENABLE_NEXTCLOUD_DATA:-true}" == "true" ]] && ARCHIVES+=("$(make_archive "nextcloud-data" "files/nextcloud-data")") || true
fi
if [[ "${ENABLE_GITEA:-false}" == "true" ]]; then
ARCHIVES+=("$(make_archive "gitea" "files/gitea-data")")
if [[ -d "$STAGING_DIR/files/gitea-etc" && -n "$(ls -A "$STAGING_DIR/files/gitea-etc" 2>/dev/null || true)" ]]; then
ARCHIVES+=("$(make_archive "gitea-etc" "files/gitea-etc")")
fi
fi
echo "-- Archives created:"
for f in "${ARCHIVES[@]}"; do
echo " - $f ($(du -h "$f" | awk '{print $1}'))"
done
# restart gitea before upload
gitea_service_start
# ---------- Upload via rclone ----------
if [[ "${ENABLE_UPLOAD}" == "true" ]]; then
RCLONE_OUTPUT_FILE="${LOG_DIR}/rclone_${ts}.log"
RCLONE_STATUS="RUNNING"
remote_run="${RCLONE_REMOTE_BASE}/${ARCHIVE_PREFIX}_${ts}"
echo "-- rclone remote check: ${RCLONE_REMOTE_BASE}"
"$RCLONE_BIN" lsf "${RCLONE_REMOTE_BASE}" --max-depth 1 >/dev/null 2>&1 || die "Remote not reachable: ${RCLONE_REMOTE_BASE}"
echo "-- Creating remote folder: ${remote_run}"
"$RCLONE_BIN" mkdir "${remote_run}" >/dev/null 2>&1 || true
common_args=(
"--checksum"
"--retries" "${RCLONE_RETRIES}"
"--low-level-retries" "${RCLONE_LOW_LEVEL_RETRIES}"
"--retries-sleep" "${RCLONE_RETRIES_SLEEP}"
"--stats" "${RCLONE_STATS}"
"--stats-one-line"
"--log-level" "INFO"
"--transfers" "${RCLONE_TRANSFERS}"
"--checkers" "${RCLONE_CHECKERS}"
"--timeout" "${RCLONE_TIMEOUT}"
"--contimeout" "${RCLONE_CONTIMEOUT}"
"--onedrive-chunk-size" "${RCLONE_ONEDRIVE_CHUNK_SIZE}"
)
[[ "${RCLONE_BWLIMIT}" != "0" ]] && common_args+=("--bwlimit" "${RCLONE_BWLIMIT}") || true
echo "-- Uploading archives to: ${remote_run} (log: ${RCLONE_OUTPUT_FILE})"
for f in "${ARCHIVES[@]}"; do
echo "-- Upload: $(basename "$f")"
if ionice -c "${IONICE_CLASS}" -n "${IONICE_LEVEL}" nice -n "${NICE_LEVEL}" "$RCLONE_BIN" copy "$f" "${remote_run}" "${common_args[@]}" | tee -a "$RCLONE_OUTPUT_FILE"
then
:
else
RCLONE_STATUS="FAIL"
die "Upload failed for $(basename "$f") (see ${RCLONE_OUTPUT_FILE})"
fi
done
RCLONE_STATUS="OK"
if [[ "${ENABLE_REMOTE_RETENTION}" == "true" ]]; then
echo "-- Remote retention: delete objects older than ${REMOTE_RETENTION_DAYS}d (best effort)"
"$RCLONE_BIN" delete "${RCLONE_REMOTE_BASE}" --min-age "${REMOTE_RETENTION_DAYS}d" --log-level INFO || true
fi
else
echo "-- Upload disabled (ENABLE_UPLOAD=false)"
fi
# ---------- Local retention (again, to enforce after new archive) ----------
cleanup_old_local_archives
echo "== app-backup done: ${ts} =="


#!/usr/bin/env bash
set -Eeuo pipefail
umask 027
# ==============================================================================
# app-restore.sh
# - Restore from a "run folder" (local dir or rclone remote folder)
# - Applies archives per component (meta/db/wordpress/nextcloud/nextcloud-data/gitea...)
# ==============================================================================
LOG_DIR="/var/log/app-backup"
mkdir -p "$LOG_DIR"
ts="$(date '+%Y-%m-%d_%H-%M-%S')"
LOG_FILE="${LOG_DIR}/app-restore_${ts}.log"
exec > >(tee -a "$LOG_FILE" | systemd-cat -t app-restore -p info) 2>&1
# ---------- Config ----------
CONFIG_FILE="/etc/app-backup/app-backup.conf"
if [[ -r "$CONFIG_FILE" ]]; then
# shellcheck disable=SC1090
source "$CONFIG_FILE"
else
echo "ERROR: Config file not readable: ${CONFIG_FILE}" >&2
exit 2
fi
# ---------- Defaults ----------
: "${WORKDIR:=/var/backups/app-backup}"
: "${RESTORE_ROOT:=${WORKDIR}/restore}"
: "${ARCHIVE_PREFIX:=appbackup}"
: "${RCLONE_BIN:=rclone}"
: "${RCLONE_REMOTE_BASE:=OneDrive:Sicherung/JRITServerBackups/$(hostname -s)}"
: "${DRY_RUN:=false}"
: "${RESTORE_DB:=true}"
: "${RESTORE_FILES:=true}"
: "${RESTORE_STRICT_DELETE:=false}"
: "${ENABLE_NEXTCLOUD_MAINTENANCE:=true}"
: "${NC_OCC_USER:=apache}"
: "${NC_FILES_SCAN_AFTER_RESTORE:=false}"
: "${ENABLE_GITEA_SERVICE_STOP:=true}"
: "${GITEA_SERVICE_NAME:=gitea}"
: "${ENABLE_HTTPD_STOP:=false}"
: "${HTTPD_SERVICE_NAME:=httpd}"
: "${ENABLE_PHPFPM_STOP:=false}"
: "${PHPFPM_SERVICE_NAME:=php-fpm}"
die() { echo "ERROR: $*"; exit 1; }
have() { command -v "$1" >/dev/null 2>&1; }
run_cmd() { [[ "${DRY_RUN}" == "true" ]] && echo "[DRY_RUN] $*" || "$@"; }
# Nextcloud maintenance-mode safety trap
NC_MAINTENANCE_ON=false
nc_maintenance_off() {
if [[ "${NC_MAINTENANCE_ON}" == "true" ]]; then
echo "-- Nextcloud maintenance mode OFF..."
run_cmd sudo -u "${NC_OCC_USER}" php "${NC_DIR}/occ" maintenance:mode --off || true
NC_MAINTENANCE_ON=false
fi
}
GITEA_WAS_STOPPED=false
HTTPD_WAS_STOPPED=false
PHPFPM_WAS_STOPPED=false
gitea_start() { [[ "${GITEA_WAS_STOPPED}" == "true" ]] && { echo "-- Starting gitea (trap)"; run_cmd systemctl start "${GITEA_SERVICE_NAME}" || true; GITEA_WAS_STOPPED=false; }; }
httpd_start() { [[ "${HTTPD_WAS_STOPPED}" == "true" ]] && { echo "-- Starting httpd (trap)"; run_cmd systemctl start "${HTTPD_SERVICE_NAME}" || true; HTTPD_WAS_STOPPED=false; }; }
phpfpm_start(){ [[ "${PHPFPM_WAS_STOPPED}" == "true" ]] && { echo "-- Starting php-fpm (trap)"; run_cmd systemctl start "${PHPFPM_SERVICE_NAME}" || true; PHPFPM_WAS_STOPPED=false; }; }
on_exit() { local ec=$?; nc_maintenance_off; gitea_start; httpd_start; phpfpm_start; exit "${ec}"; }
trap on_exit EXIT
# ---------- Preconditions ----------
[[ $EUID -eq 0 ]] || die "Must run as root."
for t in tar rsync flock df find stat; do have "$t" || die "Missing required tool: $t"; done
mkdir -p "$WORKDIR" "$RESTORE_ROOT" "$LOG_DIR"
# ---------- Locking ----------
LOCKFILE="/run/app-backup.lock"
exec 9>"$LOCKFILE"
flock -n 9 || die "Another backup/restore already running (lock: $LOCKFILE)"
# ---------- Input ----------
usage() {
cat <<EOF
Usage:
$0 --remote-run <run_folder_name> # e.g. ${ARCHIVE_PREFIX}_2026-02-11_02-31-28
$0 --local-run <path_to_run_dir> # directory containing archives
Options:
--dry-run
--no-db
--no-files
EOF
}
REMOTE_RUN=""
LOCAL_RUN=""
while [[ $# -gt 0 ]]; do
case "$1" in
--remote-run) REMOTE_RUN="${2:-}"; shift 2;;
--local-run) LOCAL_RUN="${2:-}"; shift 2;;
--dry-run) DRY_RUN=true; shift;;
--no-db) RESTORE_DB=false; shift;;
--no-files) RESTORE_FILES=false; shift;;
-h|--help) usage; exit 0;;
*) die "Unknown arg: $1";;
esac
done
[[ -z "${REMOTE_RUN}" && -z "${LOCAL_RUN}" ]] && { usage; exit 2; }
RUN_DIR="${RESTORE_ROOT}/run_${ts}"
DOWNLOAD_DIR="${RUN_DIR}/downloads"
EXTRACT_DIR="${RUN_DIR}/extract"
mkdir -p "$DOWNLOAD_DIR" "$EXTRACT_DIR"
if [[ -n "${REMOTE_RUN}" ]]; then
have "$RCLONE_BIN" || die "rclone missing but --remote-run used"
remote_path="${RCLONE_REMOTE_BASE}/${REMOTE_RUN}"
echo "-- Fetching archives from remote: ${remote_path} -> ${DOWNLOAD_DIR}"
run_cmd "$RCLONE_BIN" copy "${remote_path}" "${DOWNLOAD_DIR}" --checksum --log-level INFO
SRC_DIR="${DOWNLOAD_DIR}"
else
[[ -d "${LOCAL_RUN}" ]] || die "Local run dir not found: ${LOCAL_RUN}"
SRC_DIR="${LOCAL_RUN}"
fi
echo "== app-restore start: ${ts} =="
echo "-- Config: ${CONFIG_FILE}"
echo "-- Log: ${LOG_FILE}"
echo "-- Source dir: ${SRC_DIR}"
echo "-- DRY_RUN: ${DRY_RUN}"
echo "-- RESTORE_FILES: ${RESTORE_FILES}"
echo "-- RESTORE_DB: ${RESTORE_DB}"
echo "-- STRICT_DELETE: ${RESTORE_STRICT_DELETE}"
detect_tar_flags() { case "$1" in *.tar.zst) echo "--zstd" ;; *.tar.gz) echo "-z" ;; *) die "Unsupported archive: $1" ;; esac; }
extract_archive() {
local f="$1" flags; flags="$(detect_tar_flags "$f")"
echo "-- Extract: $(basename "$f") -> ${EXTRACT_DIR}"
[[ "${DRY_RUN}" == "true" ]] && echo "[DRY_RUN] tar ${flags} -xf $f -C ${EXTRACT_DIR}" || tar ${flags} -xf "$f" -C "$EXTRACT_DIR"
}
pick_one() { ls -1 "${SRC_DIR}"/$1 2>/dev/null | sort | tail -n 1 || true; }
# stop services (optional)
if [[ "${ENABLE_HTTPD_STOP}" == "true" ]] && systemctl is-active --quiet "${HTTPD_SERVICE_NAME}"; then
echo "-- Stopping httpd for restore: ${HTTPD_SERVICE_NAME}"
run_cmd systemctl stop "${HTTPD_SERVICE_NAME}"; HTTPD_WAS_STOPPED=true
fi
if [[ "${ENABLE_PHPFPM_STOP}" == "true" ]] && systemctl is-active --quiet "${PHPFPM_SERVICE_NAME}"; then
echo "-- Stopping php-fpm for restore: ${PHPFPM_SERVICE_NAME}"
run_cmd systemctl stop "${PHPFPM_SERVICE_NAME}"; PHPFPM_WAS_STOPPED=true
fi
if [[ "${ENABLE_GITEA:-false}" == "true" && "${ENABLE_GITEA_SERVICE_STOP}" == "true" ]] && systemctl is-active --quiet "${GITEA_SERVICE_NAME}"; then
echo "-- Stopping gitea for restore: ${GITEA_SERVICE_NAME}"
run_cmd systemctl stop "${GITEA_SERVICE_NAME}"; GITEA_WAS_STOPPED=true
fi
# nextcloud maintenance
if [[ "${ENABLE_NEXTCLOUD:-false}" == "true" && "${ENABLE_NEXTCLOUD_MAINTENANCE}" == "true" ]] && [[ -d "${NC_DIR}" && -f "${NC_DIR}/occ" ]]; then
echo "-- Nextcloud maintenance mode ON..."
run_cmd sudo -u "${NC_OCC_USER}" php "${NC_DIR}/occ" maintenance:mode --on
NC_MAINTENANCE_ON=true
fi
# extract archives
meta_arc="$(pick_one "${ARCHIVE_PREFIX}_*_meta.tar.*")"; [[ -n "$meta_arc" ]] && extract_archive "$meta_arc" || true
db_arc="$(pick_one "${ARCHIVE_PREFIX}_*_db.tar.*")"
wp_arc="$(pick_one "${ARCHIVE_PREFIX}_*_wordpress.tar.*")"
nc_arc="$(pick_one "${ARCHIVE_PREFIX}_*_nextcloud.tar.*")"
ncd_arc="$(pick_one "${ARCHIVE_PREFIX}_*_nextcloud-data.tar.*")"
g_arc="$(pick_one "${ARCHIVE_PREFIX}_*_gitea.tar.*")"
g_etc_arc="$(pick_one "${ARCHIVE_PREFIX}_*_gitea-etc.tar.*")"
[[ -n "$db_arc" ]] && extract_archive "$db_arc" || true
[[ -n "$wp_arc" && "${RESTORE_FILES}" == "true" ]] && extract_archive "$wp_arc" || true
[[ -n "$nc_arc" && "${RESTORE_FILES}" == "true" ]] && extract_archive "$nc_arc" || true
[[ -n "$ncd_arc" && "${RESTORE_FILES}" == "true" ]] && extract_archive "$ncd_arc" || true
[[ -n "$g_arc" && "${RESTORE_FILES}" == "true" ]] && extract_archive "$g_arc" || true
[[ -n "$g_etc_arc" && "${RESTORE_FILES}" == "true" ]] && extract_archive "$g_etc_arc" || true
# ---------- Restore files ----------
rsync_restore_dir() {
local src="$1" dst="$2"
[[ -d "$src" ]] || die "Restore source missing: $src"
mkdir -p "$dst"
local del=(); [[ "${RESTORE_STRICT_DELETE}" == "true" ]] && del=(--delete)
run_cmd rsync -aHAX --numeric-ids --info=stats2 "${del[@]}" "$src"/ "$dst"/
}
if [[ "${RESTORE_FILES}" == "true" ]]; then
echo "-- Restoring files..."
if [[ -d "${EXTRACT_DIR}/files/wordpress" && "${ENABLE_WORDPRESS:-false}" == "true" ]]; then
echo "-- WordPress -> ${WP_DIR}"
rsync_restore_dir "${EXTRACT_DIR}/files/wordpress" "${WP_DIR}"
fi
if [[ "${ENABLE_NEXTCLOUD:-false}" == "true" ]]; then
[[ -d "${EXTRACT_DIR}/files/nextcloud" ]] && { echo "-- Nextcloud code -> ${NC_DIR}"; rsync_restore_dir "${EXTRACT_DIR}/files/nextcloud" "${NC_DIR}"; }
: "${NC_DATA_DIR:=${NC_DIR%/}/data}"
[[ -d "${EXTRACT_DIR}/files/nextcloud-data" && "${ENABLE_NEXTCLOUD_DATA:-true}" == "true" ]] && { echo "-- Nextcloud data -> ${NC_DATA_DIR}"; rsync_restore_dir "${EXTRACT_DIR}/files/nextcloud-data" "${NC_DATA_DIR}"; }
fi
if [[ "${ENABLE_GITEA:-false}" == "true" ]]; then
: "${GITEA_DATA_DIR:=/var/lib/gitea/data}"
[[ -d "${EXTRACT_DIR}/files/gitea-data" ]] && { echo "-- Gitea data -> ${GITEA_DATA_DIR}"; rsync_restore_dir "${EXTRACT_DIR}/files/gitea-data" "${GITEA_DATA_DIR}"; }
: "${GITEA_ETC_DIR:=/etc/gitea}"
[[ -d "${EXTRACT_DIR}/files/gitea-etc" && -n "${GITEA_ETC_DIR:-}" ]] && { echo "-- Gitea etc -> ${GITEA_ETC_DIR}"; rsync_restore_dir "${EXTRACT_DIR}/files/gitea-etc" "${GITEA_ETC_DIR}"; }
fi
else
echo "-- RESTORE_FILES=false (skipping file restore)"
fi
# ---------- Restore databases ----------
mysql_restore_sql() {
local cnf="$1" db="$2" sql="$3"
[[ -r "$cnf" ]] || die "DB CNF not readable: $cnf"
[[ -r "$sql" ]] || die "SQL not readable: $sql"
have mysql || die "mysql client missing"
echo "-- Import MySQL/MariaDB DB: ${db} from $(basename "$sql")"
run_cmd mysql --defaults-extra-file="$cnf" "$db" < "$sql"
}
if [[ "${RESTORE_DB}" == "true" && -d "${EXTRACT_DIR}/db" ]]; then
echo "-- Restoring databases..."
wp_sql="$(ls -1 "${EXTRACT_DIR}/db"/wordpress_*.sql 2>/dev/null | sort | tail -n 1 || true)"
nc_sql="$(ls -1 "${EXTRACT_DIR}/db"/nextcloud_*.sql 2>/dev/null | sort | tail -n 1 || true)"
g_sql="$(ls -1 "${EXTRACT_DIR}/db"/gitea_*.sql 2>/dev/null | sort | tail -n 1 || true)"
[[ -n "${WP_DB_NAME:-}" && -n "$wp_sql" ]] && mysql_restore_sql "${WP_DB_CNF}" "${WP_DB_NAME}" "$wp_sql" || echo "WARN: WP DB dump missing"
[[ -n "${NC_DB_NAME:-}" && -n "$nc_sql" ]] && mysql_restore_sql "${NC_DB_CNF}" "${NC_DB_NAME}" "$nc_sql" || echo "WARN: NC DB dump missing"
[[ "${ENABLE_GITEA:-false}" == "true" && -n "${GITEA_DB_NAME:-}" && -n "$g_sql" ]] && mysql_restore_sql "${GITEA_DB_CNF}" "${GITEA_DB_NAME}" "$g_sql" || true
else
echo "-- RESTORE_DB=false or no db dump present (skipping)"
fi
# ---------- Post-restore Nextcloud steps ----------
if [[ "${ENABLE_NEXTCLOUD:-false}" == "true" && -d "${NC_DIR}" && -f "${NC_DIR}/occ" ]]; then
echo "-- Nextcloud post-restore: maintenance:repair"
run_cmd sudo -u "${NC_OCC_USER}" php "${NC_DIR}/occ" maintenance:repair || true
if [[ "${NC_FILES_SCAN_AFTER_RESTORE}" == "true" ]]; then
echo "-- Nextcloud post-restore: files:scan --all"
run_cmd sudo -u "${NC_OCC_USER}" php "${NC_DIR}/occ" files:scan --all || true
else
echo "-- Skipping files:scan (set NC_FILES_SCAN_AFTER_RESTORE=true to enable)"
fi
if [[ "${ENABLE_NEXTCLOUD_MAINTENANCE}" == "true" ]]; then
nc_maintenance_off
fi
fi
gitea_start
phpfpm_start
httpd_start
echo "== app-restore done: ${ts} =="
echo "-- Working dir: ${RUN_DIR}"
echo "-- Log: ${LOG_FILE}"