2024 - Migrating the Blog to Free Services

Tuesday, April 2, 2024 (edited)

This post was last modified on Wednesday, May 15, 2024. Parts of it may no longer apply; if in doubt, ask the author.


The domain is a free pp.ua one.

1. Host a server-monitoring probe on Koyeb

This follows the approach of the open-source project https://github.com/fscarmen2/Argo-Nezha-Service-Container; see its tutorial for the details.

P.S.: Nezha works well, but I worry about its security exposure (the agent can at least be run with --skip-conn --skip-procs --disable-auto-update --disable-command-execute).
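For reference, a sketch of starting the agent with those hardening flags; the dashboard address and secret below are placeholders, not values from this setup:

```shell
# Run nezha-agent with command execution, connection/process reporting
# and auto-update disabled. Address and secret are placeholders.
./nezha-agent -s your-dashboard.example.com:5555 -p your-agent-secret \
  --skip-conn --skip-procs \
  --disable-auto-update --disable-command-execute
```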

2. Deploy the blog frontend on Vercel

Follow the official tutorial: https://mx-space.js.org/themes/shiro

Reverse-proxy it through AWS

Run Caddy with docker-compose:

version: '3.7'
services:
  caddy:
    image: caddy:2-alpine
    container_name: caddy_server
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./caddy_data:/data
      - ./caddy_config:/config
      - ./caddy_log:/var/log/caddy
      - ./Caddyfile:/etc/caddy/Caddyfile
    restart: unless-stopped
## Caddyfile
yatu.org {
  reverse_proxy https://shiro-beryl-eta.vercel.app {
    header_up Host {http.reverse_proxy.upstream.hostport}
    header_down Access-Control-Allow-Origin *
  }
  log {
    output file /var/log/caddy/access.log
  }
}

www.yatu.org {
    redir https://yatu.org{uri}
}
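After edits, the Caddyfile can be sanity-checked and reloaded from inside the container; a sketch, assuming the container name and config path from the compose file above:

```shell
# Validate the Caddyfile without restarting the server
docker compose exec caddy caddy validate --config /etc/caddy/Caddyfile

# Apply the new config with zero downtime
docker compose exec caddy caddy reload --config /etc/caddy/Caddyfile
```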

3. Migrate the blog backend to 🐢 Oracle Cloud

For DD'ing (netboot-reinstalling) the system, see: https://www.nodeseek.com/post-86852-1

Everything is deployed with docker-compose.

Building the image

Fork the backend project first.

Only the workflow that pushes the image is needed:

## /core/.github/workflows/push.yaml

name: Export (core) to Dockerhub

on:
  push:
    branches:
      - master

jobs:
  docker:
    name: Docker Release
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest]
        arch: ["amd64", "arm64"]
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Docker meta
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: |
            avacooper/core
          tags: |
            type=ref,event=branch
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=semver,pattern={{major}}
            type=sha
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Copy .env
        run: |
          cp .env.example .env
      - name: Build and export to Docker
        uses: docker/build-push-action@v5
        with:
          context: .
          load: true
          tags: |
            ${{ steps.meta.outputs.tags }}
            avacooper/core:latest
          labels: ${{ steps.meta.outputs.labels }}
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Test
        run: |
          bash ./scripts/workflow/test-docker.sh
          sudo rm -rf ./data
          
      - name: Login to DockerHub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push
        id: docker_build
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

Once our own image shows up on Docker Hub, this step is done.

Next come the runtime config and a docker-compose.yaml that also handles automatic backups.

The backup script archives the data and the current files into a date-stamped file and pushes it to GitHub.

#!/bin/sh

. /data/.env

backup_file="/backup/backup_data_$(date +%Y%m%d%H%M%S)"
tar -czvf "$backup_file.tar.gz" -C /data .

# Encrypt
openssl enc -aes-256-cbc -salt -in "$backup_file.tar.gz" -out "$backup_file.enc.tar.gz" -pass pass:yourpass

# Remove the unencrypted archive
rm "$backup_file.tar.gz"

# Keep only the two newest encrypted backups
ls -1 /backup/backup_data_*.enc.tar.gz | sort -r | tail -n +3 | xargs rm -f

cd /backup

# Run gc so .git doesn't keep growing
if [ -d "./.git" ]; then
  git gc
fi

if ! git lfs > /dev/null 2>&1; then
  echo "Git LFS not installed. Please ensure it is installed and try again."
  exit 1
fi

if [ ! -d "./.git" ]; then
  git init
  git lfs install
  git config user.name "$GIT_USERNAME"
  git config user.email "$GIT_EMAIL"
  git remote add origin "$GIT_REPO"

  # GitHub rejects files over 100 MB, so push the archives through LFS
  git config http.postBuffer 524288000
  git lfs track "*.tar.gz"
  echo "*.tar.gz filter=lfs diff=lfs merge=lfs -text" >> .gitattributes
fi

git branch --show-current | grep -q '^main$' || git branch -m master main
git fetch --depth 1
git branch -u origin/main main 2>/dev/null || git push --set-upstream origin main
# git reset --hard origin/main
echo "$(date)" > README.md
git add -A
git commit -m "Automated backup $(date)"
git push --force -u origin main

The backup script needs a personal access token with read/write access to the repo; embed it in $GIT_REPO (e.g. https://<token>@github.com/<user>/<repo>.git).
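Restoring a backup requires the matching decrypt invocation: same cipher and password, plus `-d`. A self-contained round-trip sketch with throwaway paths and a placeholder password:

```shell
tmp=$(mktemp -d)
echo "hello" > "$tmp/data.txt"

# Archive and encrypt, mirroring the backup script
tar -czf "$tmp/b.tar.gz" -C "$tmp" data.txt
openssl enc -aes-256-cbc -salt -in "$tmp/b.tar.gz" \
  -out "$tmp/b.enc" -pass pass:yourpass

# Decrypt and unpack: add -d, keep the same cipher and password
openssl enc -d -aes-256-cbc -in "$tmp/b.enc" \
  -out "$tmp/b.dec.tar.gz" -pass pass:yourpass
mkdir "$tmp/restore"
tar -xzf "$tmp/b.dec.tar.gz" -C "$tmp/restore"
cat "$tmp/restore/data.txt"
```

Recent OpenSSL versions print a warning that the default key derivation is deprecated; adding `-pbkdf2` to both commands silences it, but the two sides must agree.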

Create a Dockerfile that runs the scheduled job inside a container:

FROM alpine:latest

RUN apk update && \
    apk add --no-cache git bash dcron curl openssl && \
    apk add git-lfs && \
    git lfs install

RUN echo "*/45 * * * * /data/backup.sh >> /var/log/cron.log 2>&1" > /etc/crontabs/root

USER root
CMD ["crond", "-f"]
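One caveat: `*/45` in the minute field does not mean "every 45 minutes". Step values count from 0, so it matches every minute divisible by 45, i.e. the job fires at :00 and :45 of each hour. A quick check of which minutes match:

```shell
# Which minutes does the cron field "*/45" match?
# Step values start from 0, so: minutes divisible by 45.
matched=$(for m in $(seq 0 59); do
  [ $((m % 45)) -eq 0 ] && echo "$m"
done)
echo "$matched"   # prints: 0 and 45
```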

Hard-link a few files into data/ so they are included in backups and easy to restore:

ln .env ./data/.env
ln backup.sh ./data/backup.sh
ln docker-compose.yaml ./data/docker-compose.yaml
ln Dockerfile.backup ./data/Dockerfile.backup
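Note these are hard links, not symlinks: both paths point at the same inode, so edits through either are seen by the other, but replacing a file outright (rather than editing it in place) silently breaks the link. A quick sanity check of the technique in a throwaway directory:

```shell
tmp=$(mktemp -d)
cd "$tmp"
echo "SECRET=1" > .env
mkdir data
ln .env data/.env

# Hard links share an inode; stat -c %i prints it (GNU coreutils)
ino_a=$(stat -c %i .env)
ino_b=$(stat -c %i data/.env)
[ "$ino_a" = "$ino_b" ] && echo "same inode: linked"
```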

Git LFS data storage allows 1 GB; with two retained versions, that works as long as the data stays under roughly 500 MB.

The combined docker-compose.yaml:

## Putting it all together
version: '3.8'

services:
  caddy:
    image: caddy:2-alpine
    container_name: caddy_server
    ports:
      - "80:80"
      - "443:443"
    networks:
      - app-network
    volumes:
      - ./data/caddy_data:/data
      - ./data/caddy_config:/config
      - ./data/caddy_log:/var/log/caddy
      - ./data/Caddyfile:/etc/caddy/Caddyfile
    restart: unless-stopped
  app:
    container_name: mx-server
    image: avacooper/core:master
    command: bash ./docker-run.sh
    environment:
      - TZ=US/Eastern
      - NODE_ENV=production
      - ALLOWED_ORIGINS
      - JWT_SECRET
      - ENCRYPT_KEY
      - ENCRYPT_ENABLE
      - FORCE_CACHE_HEADER
      - CDN_CACHE_HEADER

    volumes:
      - ./data/mx-space:/root/.mx-space
    ports:
      - '2333:2333'
    depends_on:
      - mongo
      - redis
    links:
      - mongo
      - redis
    networks:
      - app-network
    restart: always
    healthcheck:
      test: ['CMD', 'curl', '-f', 'http://127.0.0.1:2333/api/v2/ping']
      interval: 1m30s
      timeout: 30s
      retries: 5
      start_period: 30s

  mongo:
    container_name: mongo
    image: mongo
    volumes:
      - ./data/db:/data/db
    networks:
      - app-network
    restart: always
  redis:
    image: redis
    container_name: redis

    networks:
      - app-network
    restart: always

  easyimage:
    image: ddsderek/easyimage:latest
    container_name: easyimage
    networks:
      - app-network
    ports:
      - '8080:80'
    environment:
      - TZ=US/Eastern
      - PUID=1000
      - PGID=1000
      - DEBUG=false
    volumes:
      - './data/easyimage/config:/app/web/config'
      - './data/easyimage/i:/app/web/i'
    restart: unless-stopped

  backup:
    init: true # https://github.com/dubiousjim/dcron/issues/13#issuecomment-1406937781
    build:
      context: .
      dockerfile: Dockerfile.backup
    depends_on:
      - app
    volumes:
      - ./data:/data
      - ./backup:/backup
    # command: sh -c "apk add --no-cache mongodb-tools redis && /backup/backup.sh"
    # command: sh -c "/backup/backup.sh"

networks:
  app-network:
    driver: bridge

The directory tree after running:

./blog/
├── backup
│   ├── backup_data_20240407133216.enc.tar.gz
│   ├── .git
│   ├── .gitattributes
│   └── README.md
├── backup.sh
├── data
│   ├── backup.sh
│   ├── caddy_config
│   ├── caddy_data
│   ├── Caddyfile
│   ├── caddy_log
│   ├── db
│   ├── docker-compose.yaml
│   ├── Dockerfile.backup
│   ├── easyimage
│   ├── .env
│   ├── mx-space
│   ├── redis
│   └── uptime
├── docker-compose.yaml
├── Dockerfile.backup
└── .env

Update, 2024-04-08:

🤡🤡🤡 Git LFS storage turned out to be a mess: the quota fills up almost immediately and cleaning it up doesn't free anything. Avoid it if at all possible.


Luckily, there was still 1 GB of unused storage on Scaleway, so I put alist + WebDAV on it as the backup target.

version: '3.7'
services:
  caddy:
    image: caddy:2-alpine
    container_name: caddy_server
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./caddy_data:/data
      - ./caddy_config:/config
      - ./caddy_log:/var/log/caddy
      - ./Caddyfile:/etc/caddy/Caddyfile
    restart: unless-stopped
    networks:
      - app-network

  alist:
    image: 'xhofe/alist:latest'
    container_name: alist
    volumes:
      - './alist:/opt/alist/data'
    ports:
      - '5244:5244'
    environment:
      - PUID=0
      - PGID=0
      - UMASK=022
    restart: unless-stopped
    networks:
      - app-network
      
networks:
  app-network:
    driver: bridge

Configure alist's WebDAV, then mount it on the backup host:

sudo apt-get update
sudo apt-get install davfs2
sudo mkdir /mnt/webdav
sudo mount -t davfs -o noexec http://your-webdav-url /mnt/webdav
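To avoid typing credentials on every mount, davfs2 reads them from /etc/davfs2/secrets, one line per mount point; the username and password below are placeholders:

```shell
# /etc/davfs2/secrets  (must be owned by root, chmod 600)
# <mount point or URL>   <username>   <password>
/mnt/webdav   alist-user   alist-password
```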

I route this over Tailscale, because the Scaleway instance has no IPv4 and going through Cloudflare caps uploads at 100 MB.

In docker-compose.yaml, swap the original ./backup mount for /mnt/webdav, then adjust backup.sh:

. /data/.env

backup_file="/root/backup_data_$(date +%Y%m%d%H%M%S)"
tar -czvf "$backup_file.tar.gz" -C /data .

# Encrypt
openssl enc -aes-256-cbc -salt -in "$backup_file.tar.gz" -out "$backup_file.enc.tar.gz" -pass pass:yourpass

# Remove the unencrypted archive
rm "$backup_file.tar.gz"

# Copy the latest backup onto the WebDAV mount
cp "$backup_file.enc.tar.gz" /backup

# Keep only the two newest encrypted backups
ls -1 /backup/backup_data_*.enc.tar.gz | sort -r | tail -n +3 | xargs rm -f

# Clean up the local cache
rm -rf /root/*
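The retention line keeps only the two newest archives: `sort -r` puts the newest timestamps first, `tail -n +3` emits everything from the third entry onward, and those get deleted. A self-contained check with dummy files:

```shell
tmp=$(mktemp -d)
for ts in 20240101 20240102 20240103 20240104; do
  touch "$tmp/backup_data_${ts}000000.enc.tar.gz"
done

# Newest first, skip the two keepers, delete the rest
ls -1 "$tmp"/backup_data_*.enc.tar.gz | sort -r | tail -n +3 | xargs rm -f

kept=$(ls "$tmp")
echo "$kept"   # only the 20240103 and 20240104 archives remain
```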