Monday, October 23, 2023

Home Server Environment Migration (6)

Configuring External Access

You never know what life will bring.
Something wonderful and something unwanted arrived at the same time. What am I supposed to do with these feelings?

Settings

  • Assign a static internal IP on the external IPv4 router
  • Metric settings
  • Port forwarding settings
  • mydns.jp update settings
  • Nextcloud-side settings

Assign a static internal IP on the external IPv4 router

Nothing worth noting here.

Metric settings

https://pcvogel.sarakura.net/2021/01/15/32107

adeno@blackcore:~$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    102    0        0 enp1s0
0.0.0.0         192.168.2.1     0.0.0.0         UG    103    0        0 enp4s0
192.168.1.0     0.0.0.0         255.255.255.0   U     102    0        0 enp1s0
192.168.2.0     0.0.0.0         255.255.255.0   U     103    0        0 enp4s0

enp4s0 is the external-facing interface
enp1s0 is the internal one

Changing the default gateway
I want the 192.168.2.0/24 side to become the default route.

Remembering this old post:
https://continue-to-challenge.blogspot.com/2019/06/ipoe.html

adeno@blackcore:~$ nmcli 
enp4s0: connected to 有線接続 1
        "Realtek RTL8111/8168/8411"
        ethernet (r8169), A8:A1:59:**:**:**, hw, mtu 1500
        ip4 default
        inet4 192.168.2.21/24
        route4 192.168.2.0/24 metric 101
        route4 default via 192.168.2.1 metric 101

enp1s0: connected to 有線接続 2.5G
        "Realtek RTL8125 2.5GbE"
        ethernet (r8169), 88:C9:B3:**:**:**, hw, mtu 1500
        ip6 default
        inet4 192.168.1.21/24
        route4 192.168.1.0/24 metric 102
        route4 169.254.0.0/16 metric 1000
        route4 default via 192.168.1.1 metric 102

wlp5s0: disconnected
        "Intel Wireless-AC 3168NGW"
        wifi (iwlwifi), F0:57:A6:0E:53:99, hw, mtu 1500

adeno@blackcore:~$ sudo nmcli connection modify "有線接続 1" ipv4.never-default no
adeno@blackcore:~$ sudo nmcli connection modify "有線接続 1" ipv4.ignore-auto-routes no
adeno@blackcore:~$ sudo nmcli connection modify "有線接続 2.5G" ipv4.never-default yes
adeno@blackcore:~$ sudo nmcli connection modify "有線接続 2.5G" ipv4.ignore-auto-routes yes
adeno@blackcore:~$ sudo nmcli con up 有線接続\ 1
adeno@blackcore:~$ sudo nmcli con up 有線接続\ 2.5G 
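To confirm the change stuck, compare the metrics of the remaining default routes: the lowest metric wins. The helper below is my own sketch (the name `default_iface` and the parsing are mine, not from any tool); it is meant to be fed the output of `ip route show default`:

```shell
# Pick the interface of the lowest-metric default route.
# Input: `ip route show default` lines on stdin, e.g.
#   default via 192.168.2.1 dev enp4s0 proto dhcp metric 101
default_iface() {
    awk '{
        m = ""; d = ""
        for (i = 1; i < NF; i++) {
            if ($i == "dev")    d = $(i + 1)
            if ($i == "metric") m = $(i + 1)
        }
        print m, d                      # e.g. "101 enp4s0"
    }' | sort -n | head -n 1 | awk '{ print $2 }'
}

# usage: ip route show default | default_iface
# after the nmcli changes above, this should print enp4s0
```

A route line without a `metric` field sorts first, which matches the kernel's treatment of it as metric 0.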

Port forwarding settings

Forward external port 22 to port 22 on the internal server's IP

and so on.

I really do forget everything...

Port restrictions

Service    Port
certbot    80/tcp, 443/tcp
nextcloud  20443/tcp
ssh        22/tcp
samba

mydns.jp update settings

I want to run the update both on a schedule and whenever the global IP changes.

Periodic execution

Run it on a schedule with the systemd timers I learned about last time.

$ cat mydns_update.sh 
#!/bin/bash
#****.mydns.jp
wget -O - --http-user=****** --http-password=***** https://ipv4.mydns.jp/login.html

$ sudo cat /etc/systemd/system/mydns.renew.service 
[Unit]
Description=MyDNS.jp Renew
RefuseManualStart=no
RefuseManualStop=yes

[Service]
Type=oneshot
ExecStart=/home/adeno/BlackCoreEnv/mydns_update.sh
$ sudo cat /etc/systemd/system/mydns.renew.timer 
[Unit]
Description=MyDNS.jp Renew

[Timer]
OnBootSec=5min
OnUnitActiveSec=1d

[Install]
WantedBy=timers.target

Run when the global IP changes

To look up the global IP, I use the globalip.me service.

$ cat chk_gip.sh 
#!/bin/bash

#------------------------------------------------------
workpath=/home/adeno/BlackCoreEnv
logname=chkgip.log
mydns_update=$workpath/mydns_update.sh

#------------------------------------------------------
oldip_path=$workpath/gip_old.txt
log_path=$workpath/$logname

echo "Global IP update check" | tee $log_path
date | tee -a $log_path

touch $oldip_path
oldip=`cat $oldip_path`

newip=`curl -s globalip.me | sed '1!d'`


if [ "$newip" != "$oldip" ] ; then
        #echo "$newip" | tee -a $oldip_path
        echo "Global IP change detected: $oldip -> $newip" | tee -a $log_path
        echo "Notifying mydns.jp" | tee -a $log_path
        $mydns_update | tee -a $log_path
        echo $newip > $oldip_path

else
        echo "Global IP unchanged: $newip" | tee -a $log_path
fi

echo "Done" | tee -a $log_path
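One thing the script above does not do is validate what curl returned: if globalip.me is down or answers with an error page, that text would be saved to gip_old.txt and pushed to mydns.jp as a "new IP". A guard along these lines could sit before the comparison (`is_ipv4` is my own helper name, not part of the original script):

```shell
# Succeed only for a plausible dotted-quad IPv4 address.
is_ipv4() {
    case "$1" in
        *[!0-9.]*|'') return 1 ;;       # only digits and dots allowed
    esac
    # exactly four octets, none empty, each at most 255
    echo "$1" | awk -F. 'NF != 4 { exit 1 }
        { for (i = 1; i <= 4; i++) if ($i == "" || $i + 0 > 255) exit 1 }'
}

# in chk_gip.sh, after fetching $newip:
# is_ipv4 "$newip" || { echo "lookup failed" | tee -a $log_path; exit 1; }
```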

$ sudo cat /etc/systemd/system/globalip.renew_check.service 
[Unit]
Description=Global IP Renew Check
RefuseManualStart=no
RefuseManualStop=yes

[Service]
Type=oneshot
ExecStart=/home/adeno/BlackCoreEnv/chk_gip.sh
$ sudo cat /etc/systemd/system/globalip.renew_check.timer 
[Unit]
Description=Global IP Renew Check

$ cat /etc/systemd/system/docker-nextcloud.cert.renew.timer 
[Unit]
Description=Docker NextCloud Cert Renew

[Timer]
OnBootSec=3min
OnUnitActiveSec=5m

[Install]
WantedBy=timers.target

Written with StackEdit.

Sunday, September 24, 2023

Home Server Environment Migration (5)

Last time I was trial-and-erroring the Nextcloud migration, and before I knew it several months had passed. Months, I tell you.

Recap

Preparation

See the previous post.

Create the Docker setup for Nextcloud

drwxr-xr-x 2 nextcloud_docker users  4096 Jun 29 00:19 cert
drwxr-xr-x 4           200081 200081 4096 Mar 25 10:26 data
-rw-r--r-- 1 nextcloud_docker users   107 Feb 27 00:32 db.env
-rw-r--r-- 1 nextcloud_docker root   2106 Jun 28 06:02 docker-compose.yml
-rw-r--r-- 1 nextcloud_docker users  7797 Jun 29 00:21 nginx.conf
db.env 
---
MYSQL_ROOT_PASSWORD=********
MYSQL_PASSWORD=********
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud
docker-compose.yml 
---
version: '3'

services:
  db:
    image: mariadb:10.5
    command: --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW
    restart: always
    volumes:
      - ./data/db:/var/lib/mysql
    environment:
      - MARIADB_AUTO_UPGRADE=1
      - MARIADB_DISABLE_UPGRADE_BACKUP=1
    env_file:
      - db.env
    ports:
      - 23306:3306

  redis:
    image: redis:alpine
    restart: always

  app:
    image: nextcloud:fpm-alpine
    restart: always
    volumes:
      - ./data/nextcloud:/var/www/html
    environment:
      - MYSQL_HOST=db
      - REDIS_HOST=redis
      - PHP_MEMORY_LIMIT=4096M
      - PHP_UPLOAD_LIMIT=4096M
    env_file:
      - db.env
    depends_on:
      - db
      - redis

  web:
    image: nginx
    restart: always
    ports:
      - 28080:80
      - 20443:443
    volumes:
      - ./data/nextcloud:/var/www/html:ro
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./cert:/etc/letsencrypt/live/******.jp:ro
    depends_on:
      - app

  cron:
    image: nextcloud:fpm-alpine
    restart: always
    volumes:
      - ./data/nextcloud:/var/www/html
    entrypoint: /cron.sh
    depends_on:
      - db
      - redis

volumes:
  db:
  nextcloud:
nginx.conf 
---
worker_processes auto;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    # Prevent nginx HTTP Server Detection
    server_tokens   off;

    keepalive_timeout  65;

    upstream php-handler {
        server app:9000;
    }

    server {
        listen 80;

	# SSL configuration
	#
	listen 443 ssl default_server;
	#listen [::]:443 ssl default_server;
 	#ssl_certificate /etc/nginx/server.crt;
	#ssl_certificate_key /etc/nginx/server.key;
	ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
	ssl_ciphers HIGH:!aNULL:!MD5;
	ssl_certificate     /etc/letsencrypt/live/******.jp/nginx.pem;
	ssl_certificate_key /etc/letsencrypt/live/******.jp/nginx.key;
	
	server_name ******.jp;


        # set max upload size
        client_max_body_size 512M;
        fastcgi_buffers 64 4K;

        # Enable gzip but do not remove ETag headers
        gzip on;
        gzip_vary on;
        gzip_comp_level 4;
        gzip_min_length 256;
        gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
        gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;

        # Pagespeed is not supported by Nextcloud, so if your server is built
        # with the `ngx_pagespeed` module, uncomment this line to disable it.
        #pagespeed off;

        # HTTP response headers borrowed from Nextcloud `.htaccess`
        add_header Referrer-Policy                      "no-referrer"   always;
        add_header X-Content-Type-Options               "nosniff"       always;
        add_header X-Download-Options                   "noopen"        always;
        add_header X-Frame-Options                      "SAMEORIGIN"    always;
        add_header X-Permitted-Cross-Domain-Policies    "none"          always;
        add_header X-Robots-Tag                         "none"          always;
        add_header X-XSS-Protection                     "1; mode=block" always;

        # Remove X-Powered-By, which is an information leak
        fastcgi_hide_header X-Powered-By;

        # Path to the root of your installation
        root /var/www/html;

        # Specify how to handle directories -- specifying `/index.php$request_uri`
        # here as the fallback means that Nginx always exhibits the desired behaviour
        # when a client requests a path that corresponds to a directory that exists
        # on the server. In particular, if that directory contains an index.php file,
        # that file is correctly served; if it doesn't, then the request is passed to
        # the front-end controller. This consistent behaviour means that we don't need
        # to specify custom rules for certain paths (e.g. images and other assets,
        # `/updater`, `/ocm-provider`, `/ocs-provider`), and thus
        # `try_files $uri $uri/ /index.php$request_uri`
        # always provides the desired behaviour.
        index index.php index.html /index.php$request_uri;

        # Rule borrowed from `.htaccess` to handle Microsoft DAV clients
        location = / {
            if ( $http_user_agent ~ ^DavClnt ) {
                return 302 /remote.php/webdav/$is_args$args;
            }
        }

        location = /robots.txt {
            allow all;
            log_not_found off;
            access_log off;
        }

        # Make a regex exception for `/.well-known` so that clients can still
        # access it despite the existence of the regex rule
        # `location ~ /(\.|autotest|...)` which would otherwise handle requests
        # for `/.well-known`.
        location ^~ /.well-known {
            # The rules in this block are an adaptation of the rules
            # in `.htaccess` that concern `/.well-known`.

            location = /.well-known/carddav { return 301 /remote.php/dav/; }
            location = /.well-known/caldav  { return 301 /remote.php/dav/; }

            location /.well-known/acme-challenge    { try_files $uri $uri/ =404; }
            location /.well-known/pki-validation    { try_files $uri $uri/ =404; }

            # Let Nextcloud's API for `/.well-known` URIs handle all other
            # requests by passing them to the front-end controller.
            return 301 /index.php$request_uri;
        }

        # Rules borrowed from `.htaccess` to hide certain paths from clients
        location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/)  { return 404; }
        location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console)                { return 404; }

        # Ensure this block, which passes PHP files to the PHP process, is above the blocks
        # which handle static assets (as seen below). If this block is not declared first,
        # then Nginx will encounter an infinite rewriting loop when it prepends `/index.php`
        # to the URI, resulting in a HTTP 500 error response.
        location ~ \.php(?:$|/) {
            # Required for legacy support
            rewrite ^/(?!index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+|.+\/richdocumentscode\/proxy) /index.php$request_uri;

            fastcgi_split_path_info ^(.+?\.php)(/.*)$;
            set $path_info $fastcgi_path_info;

            try_files $fastcgi_script_name =404;

            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $path_info;
            #fastcgi_param HTTPS on;

            fastcgi_param modHeadersAvailable true;         # Avoid sending the security headers twice
            fastcgi_param front_controller_active true;     # Enable pretty urls
            fastcgi_pass php-handler;

            fastcgi_intercept_errors on;
            fastcgi_request_buffering off;
        }

        location ~ \.(?:css|js|svg|gif)$ {
            try_files $uri /index.php$request_uri;
            expires 6M;         # Cache-Control policy borrowed from `.htaccess`
            access_log off;     # Optional: Don't log access to assets
        }

        location ~ \.woff2?$ {
            try_files $uri /index.php$request_uri;
            expires 7d;         # Cache-Control policy borrowed from `.htaccess`
            access_log off;     # Optional: Don't log access to assets
        }

        # Rule borrowed from `.htaccess`
        location /remote {
            return 301 /remote.php$request_uri;
        }

        location / {
            try_files $uri $uri/ /index.php$request_uri;
        }
    }
}

Checking that it works

docker-compose -H unix:///run/user/1004/docker.sock up

That got me as far as a fresh initial state.

Migrating the data from the current Nextcloud

Creating a backup on the current server

DB data

  • mysqldump
mysqldump --single-transaction -h localhost -u nextcloud -p nextcloud > nextcloud-sqlbkp.bak

Data files

sudo tar zcvf /mnt/4Traid1/nextcloud_data_20230713.tar.gz /mnt/4Traid1/nextcloud/data
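Since everything after this depends on that archive, it is worth checking that it reads back cleanly before touching the originals: listing the archive decompresses the whole gzip stream, so truncation or corruption shows up now rather than at restore time. A small sketch (`verify_backup` is just a name I picked):

```shell
# List the archive to /dev/null; a truncated or corrupt .tar.gz fails here.
verify_backup() {
    tar -tzf "$1" > /dev/null 2>&1 && echo "archive OK"
}

# usage:
# verify_backup /mnt/4Traid1/nextcloud_data_20230713.tar.gz
```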

Trying to restore the backup data

https://docs.nextcloud.com/server/latest/admin_manual/maintenance/index.html
https://docs.nextcloud.com/server/latest/admin_manual/maintenance/migrating.html#

DB data

  • import
mysql -h localhost --port=23306 -u root -p nextcloud < nextcloud-sqlbkp.bak 
  • modify oc_storages
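The oc_storages fix-up mentioned in the Nextcloud migration docs is a single UPDATE that rewrites the `local::` storage id when the data directory path changes between hosts. The exact paths below are my assumptions for this setup (the old bare-metal path from the tar command above, and the container's /var/www/html/data); check what `SELECT id FROM oc_storages;` really shows before running anything:

```shell
# Assumed old/new datadirectory paths -- verify against the live table first.
old_id='local::/mnt/4Traid1/nextcloud/data/'
new_id='local::/var/www/html/data/'
sql="UPDATE oc_storages SET id='${new_id}' WHERE id='${old_id}';"

echo "$sql"
# then run it against the container's mariadb, e.g.:
# mysql -h localhost --port=23306 -u root -p nextcloud -e "$sql"
```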

Data files

Extracting the tar.gz gives:

sudo ls -l  /mnt/backuparea/mnt/4Traid1/nextcloud/data/
total 136064
drwxr-xr-x  7 www-data root          4096 Feb 17  2019 admin
-rw-r--r--  1 www-data root             0 Feb 12  2022 index.html
drwxr-xr-x  4 www-data www-data      4096 Feb 13  2022 kodi
-rw-r-----  1 www-data www-data  14385806 Jun 18 19:18 nextcloud.log
-rw-r-----  1 www-data www-data 124743392 Feb  7  2020 nextcloud.log.1
drwxr-xr-x  5 www-data www-data      4096 Mar  9  2022 pf

Something like that. This gets copied over the one under /mnt/backuparea/.

ls -l /mnt/backuparea/nextcloud_test/data/nextcloud/data/
total 8
-rw-r--r-- 1 200081 200081    0 Jul 13 10:36 index.html
-rw-r----- 1 200081 200081 5563 Jul 13 10:36 nextcloud.log
sudo mv /mnt/backuparea/nextcloud_test/data/nextcloud/data /mnt/backuparea/nextcloud_test/data/nextcloud/data.org
/mnt/backuparea/mnt/4Traid1/nextcloud$ sudo mv data /mnt/backuparea/nextcloud_test/data/nextcloud/
$ sudo chown 200081 -R /mnt/backuparea/nextcloud_test/data/nextcloud/data
$ sudo chgrp 200081 -R /mnt/backuparea/nextcloud_test/data/nextcloud/data

Startup

Install wizard

I named the admin user admin2 instead of admin.



The updater kicks in, so wait a while.



Dashboard



Security & setup warnings

It looked like it was working, but running the checks turned up a whole pile of warnings. Orz



occ db:add-missing-indices
nextcloud_docker@blackcore:/mnt/backuparea/nextcloud_test$ docker -H unix:///run/user/1004/docker.sock exec -it nextcloud_test_app_1 sudo -u www-data /bin/php occ db:add-missing-indices
OCI runtime exec failed: exec failed: unable to start container process: exec: "sudo": executable file not found in $PATH: unknown
nextcloud_docker@blackcore:/mnt/backuparea/nextcloud_test$ docker -H unix:///run/user/1004/docker.sock exec -it nextcloud_test_app_1 php occ db:add-missing-indices
Console has to be executed with the user that owns the file config/config.php
Current user id: 0
Owner id of config.php: 82
Try adding 'sudo -u #82' to the beginning of the command (without the single quotes)
If running with 'docker exec' try adding the option '-u 82' to the docker command (without the single quotes)

The sudo command can't be used inside the container.
Following this article, I picked the user to run the command as:
https://qiita.com/tabimoba/items/c5467432d1a635f9ce5b

nextcloud_docker@blackcore:/mnt/backuparea/nextcloud_test$ docker -H unix:///run/user/1004/docker.sock exec -it -u 82 nextcloud_test_app_1 php occ db:add-missing-indices
Check indices of the share table.
Check indices of the filecache table.
Adding additional size index to the filecache table, this can take some time...
Filecache table updated successfully.
Adding additional size index to the filecache table, this can take some time...
Filecache table updated successfully.
Adding additional path index to the filecache table, this can take some time...
Filecache table updated successfully.
Check indices of the twofactor_providers table.
Check indices of the login_flow_v2 table.
Check indices of the whats_new table.
Check indices of the cards table.
Adding cards_abiduri index to the cards table, this can take some time...
cards table updated successfully.
Check indices of the cards_properties table.
Check indices of the calendarobjects_props table.
Adding calendarobject_calid_index index to the calendarobjects_props table, this can take some time...
calendarobjects_props table updated successfully.
Check indices of the schedulingobjects table.
Adding schedulobj_principuri_index index to the schedulingobjects table, this can take some time...
schedulingobjects table updated successfully.
Check indices of the oc_properties table.
Adding properties_path_index index to the oc_properties table, this can take some time...
Adding properties_pathonly_index index to the oc_properties table, this can take some time...
oc_properties table updated successfully.
Check indices of the oc_jobs table.
Adding job_lastcheck_reserved index to the oc_jobs table, this can take some time...
oc_properties table updated successfully.
Check indices of the oc_direct_edit table.
Adding direct_edit_timestamp index to the oc_direct_edit table, this can take some time...
oc_direct_edit table updated successfully.
Check indices of the oc_preferences table.
Adding preferences_app_key index to the oc_preferences table, this can take some time...
oc_properties table updated successfully.
Check indices of the oc_mounts table.
occ db:add-missing-primary-keys
nextcloud_docker@blackcore:/mnt/backuparea/nextcloud_test$ docker -H unix:///run/user/1004/docker.sock exec -it -u 82 nextcloud_test_app_1 php occ db:add-missing-primary-keys
Check primary keys.
Adding primary key to the federated_reshares table, this can take some time...
federated_reshares table updated successfully.
Adding primary key to the systemtag_object_mapping table, this can take some time...
systemtag_object_mapping table updated successfully.
Adding primary key to the comments_read_markers table, this can take some time...
comments_read_markers table updated successfully.
Adding primary key to the collres_resources table, this can take some time...
collres_resources table updated successfully.
Adding primary key to the collres_accesscache table, this can take some time...
collres_accesscache table updated successfully.
Adding primary key to the filecache_extended table, this can take some time...
filecache_extended table updated successfully.
occ db:add-missing-columns
nextcloud_docker@blackcore:/mnt/backuparea/nextcloud_test$ docker -H unix:///run/user/1004/docker.sock exec -it -u 82 nextcloud_test_app_1 php occ db:add-missing-columns
Check columns of the comments table.
Adding additional reference_id column to the comments table, this can take some time...
Comments table updated successfully.
occ db:convert-filecache-bigint
nextcloud_docker@blackcore:/mnt/backuparea/nextcloud_test$ docker -H unix:///run/user/1004/docker.sock exec -it -u 82 nextcloud_test_app_1 php occ db:convert-filecache-bigint
Following columns will be updated:

* federated_reshares.share_id
* filecache.mtime
* filecache.storage_mtime
* filecache_extended.fileid
* files_trash.auto_id
* mounts.storage_id
* mounts.root_id
* mounts.mount_id
* share_external.id
* share_external.parent

This can take up to hours, depending on the number of files in your instance!
Continue with the conversion (y/n)? [n] y
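Each of the occ calls above repeats the same docker flags; a tiny wrapper keeps them in one place. This is my own convenience sketch, with the socket path, container name, and `-u 82` taken from the commands above (`-i` rather than `-it`, so it also works from scripts):

```shell
# Run an occ subcommand in the app container as www-data (UID 82).
occ() {
    docker -H unix:///run/user/1004/docker.sock \
        exec -i -u 82 nextcloud_test_app_1 php occ "$@"
}

# usage:
# occ db:add-missing-indices
# occ db:convert-filecache-bigint
```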

From a self-signed certificate to Let's Encrypt

As before, I'll gratefully use Let's Encrypt. Much appreciated.
So it uses ports 80 and 443... does that mean port translation is out, too?

https://letsencrypt.org/ja/docs/challenge-types/

https://certbot.eff.org/instructions?ws=nginx&os=ubuntufocal&tab=standard

https://snapcraft.io/docs/installing-snap-on-linux-mint

Installing nginx

Following this:
https://zenn.dev/hitoshiro/articles/b8170ec36d1f01

sudo apt install nginx
sudo cat /etc/nginx/conf.d/conf.conf

server{

  listen  80;
  server_name ****.mydns.jp;
  root  /var/www/html;

}

That gives me a bare-bones nginx for now.

Installing certbot

https://certbot.eff.org/instructions?ws=nginx&os=ubuntufocal&tab=standard

Installation is as described on the site above.

sudo certbot --nginx

With that, certbot kindly appended the changes to the config:

sudo cat /etc/nginx/conf.d/conf.conf

server{
  server_name ****.mydns.jp;
  root  /var/www/html;

  listen 443 ssl; # managed by Certbot
  ssl_certificate /etc/letsencrypt/live/****.mydns.jp/fullchain.pem; # managed by Certbot
  ssl_certificate_key /etc/letsencrypt/live/****.mydns.jp/privkey.pem; # managed by Certbot
  include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

}
server{
    if ($host = ****.mydns.jp) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

  listen  80;
  server_name ****.mydns.jp;
  return 404; # managed by Certbot
}

Obtaining the certificate

adeno@blackcore:~$ sudo certbot --nginx
Saving debug log to /var/log/letsencrypt/letsencrypt.log

Which names would you like to activate HTTPS for?
We recommend selecting either all domains, or all domains in a VirtualHost/server block.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: ****.mydns.jp
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel): 1
Requesting a certificate for ****.mydns.jp

Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/****.mydns.jp/fullchain.pem
Key is saved at:         /etc/letsencrypt/live/****.mydns.jp/privkey.pem
This certificate expires on 2023-12-22.
These files will be updated when the certificate renews.
Certbot has set up a scheduled task to automatically renew this certificate in the background.

Deploying certificate
Successfully deployed certificate for ****.mydns.jp to /etc/nginx/conf.d/conf.conf
Congratulations! You have successfully enabled HTTPS on https://****.mydns.jp

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
If you like Certbot, please consider supporting our work by:
 * Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
 * Donating to EFF:                    https://eff.org/donate-le
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Testing automatic renewal

sudo certbot renew --dry-run
Saving debug log to /var/log/letsencrypt/letsencrypt.log

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Processing /etc/letsencrypt/renewal/****.mydns.jp.conf
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Account registered.
Simulating renewal of an existing certificate for ****.mydns.jp

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Congratulations, all simulated renewals succeeded: 
  /etc/letsencrypt/live/****.mydns.jp/fullchain.pem (success)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Apparently it will handle future renewals automatically, too.

adeno@blackcore:~$ sudo systemctl list-timers
NEXT                        LEFT               LAST                        PASSED             UNIT                         ACTIVATES                     
(snip)
Sun 2023-09-24 09:54:00 JST 11h left           n/a                         n/a                snap.certbot.renew.timer     snap.certbot.renew.service

Huh, handy.

Referencing the certificate from nginx inside Docker

I couldn't think of a clean way to do it, so when the files change I simply copy them, change the owner, and restart the containers.

cat chk_cert.sh 
#!/bin/bash

#-----------------------------------------------------------------
# where certbot stores the certificates
source_dir="/etc/letsencrypt/live/****.mydns.jp/"

# where nginx inside docker reads them from
target_dir="/mnt/backuparea/nextcloud_test/cert/"

# user that should own the copied files
target_username="nextcloud_docker"

#-----------------------------------------------------------------


# update flag
update_required=false

# compare and update the files
for file in "$source_dir"/*
do
    # echo "$file"
    # extract the file name
    filename=$(basename "$file")

    # corresponding file path in the target directory
    target_file="$target_dir/$filename"

    # update when the file is missing or differs
    if [ ! -e "$target_file" ] || ! cmp -s "$file" "$target_file"
    then
        cp "$file" "$target_file"
        chown "$target_username" "$target_file"
        # echo "updated $filename"
        update_required=true
    fi
done

# restart nginx if anything was updated
if [ "$update_required" = true ]
then
    /mnt/backuparea/nextcloud_test/chk_cert_restart.sh
    echo "restarted because of an update"
else
    echo "no updates"
fi


cat chk_cert_restart.sh 
#!/bin/bash

# restart the docker containers
sudo -u nextcloud_docker /bin/bash -c "cd /mnt/backuparea/nextcloud_test && docker-compose -H unix:///run/user/1004/docker.sock restart"
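The core of chk_cert.sh is the "missing or different" test; pulled out on its own, it is easy to verify. (`needs_copy` is a name I made up; the script inlines this condition.)

```shell
# True when the target does not exist or differs from the source
# (cmp -s is silent and returns non-zero on any difference).
needs_copy() {
    [ ! -e "$2" ] || ! cmp -s "$1" "$2"
}

# the loop body in chk_cert.sh is then equivalent to:
# needs_copy "$file" "$target_file" && { cp "$file" "$target_file"; chown ...; }
```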

Automating the update (copy)

Until now I had only ever used cron, so I'm trying systemd timers:
https://gamingpc.one/dev/systemd-timer-cheat/
https://wiki.archlinux.jp/index.php/Systemd/タイマー

$ cat /etc/systemd/system/docker-nextcloud.cert.renew.service 
[Unit]
Description=Docker NextCloud Cert Renew
RefuseManualStart=no
RefuseManualStop=yes

[Service]
Type=oneshot
ExecStart=/mnt/backuparea/nextcloud_test/chk_cert.sh
$ cat /etc/systemd/system/docker-nextcloud.cert.renew.timer 
[Unit]
Description=Docker NextCloud Cert Renew

[Timer]
OnBootSec=5min
OnUnitActiveSec=1d

[Install]
WantedBy=timers.target

To enable it:

sudo systemctl daemon-reload
sudo systemctl enable docker-nextcloud.cert.renew.timer
sudo systemctl start docker-nextcloud.cert.renew.timer
sudo systemctl list-timers
NEXT                        LEFT               LAST                        PASSED             UNIT                              ACTIVATES                          
(snip)
Mon 2023-09-25 08:49:01 JST 23h left           Sun 2023-09-24 08:49:01 JST 2min 7s ago        docker-nextcloud.cert.renew.timer docker-nextcloud.cert.renew.service

This should do it!

Wednesday, July 26, 2023

Home Server Environment Migration (4)

Before I knew it, five months had passed. Too fast, no?
Probably payback for never properly writing up what I did in February.

Current state

  1. samba
  2. gogs
  3. MariaDB
  4. webmin

This time's target

  • nextcloud

Moving Nextcloud

Using these as references:
https://docs.nextcloud.com/server/latest/admin_manual/maintenance/index.html
https://hub.docker.com/_/nextcloud
https://blog.seigo2016.com/blog/h-blxsnew_s

Preparation

Creating a user

  • User: nextcloud_docker (UID 1004)
  • Group: nextcloud-rtls-docker (GID 200999)

Setting up sub-UIDs/sub-GIDs

$ cat /etc/subuid
nextcloud_docker:200000:65536

$ cat /etc/subgid
nextcloud_docker:200000:65536
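With the default rootless-Docker mapping, this range also explains the 200081 owner that shows up on the Nextcloud data files in the posts above: container root maps to the host user itself, and container UID N (for N >= 1) maps to subuid_start + N - 1. The nextcloud:fpm-alpine image runs as www-data, UID 82, so on the host:

```shell
# /etc/subuid grants nextcloud_docker the range 200000..265535.
# Container UID 82 (www-data) lands at 200000 + 82 - 1 on the host.
subuid_start=200000
container_uid=82
host_uid=$((subuid_start + container_uid - 1))
echo "$host_uid"    # 200081
```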

Installation

nextcloud_docker@blackcore:~$ dockerd-rootless-setuptool.sh install
[INFO] systemd not detected, dockerd-rootless.sh needs to be started manually:

PATH=/usr/bin:/sbin:/usr/sbin:$PATH dockerd-rootless.sh 

[INFO] CLI context "rootless" already exists
[INFO] Use CLI context "rootless"
Current context is now "rootless"

[INFO] Make sure the following environment variables are set (or add them to ~/.bashrc):

# WARNING: systemd not found. You have to remove XDG_RUNTIME_DIR manually on every logout.
export XDG_RUNTIME_DIR=/home/nextcloud_docker/.docker/run
export PATH=/usr/bin:$PATH
Some applications may require the following environment variable too:
export DOCKER_HOST=unix:///home/nextcloud_docker/.docker/run/docker.sock
nextcloud_docker@blackcore:~$ cat .config/systemd/user/docker.service 
[Unit]
Description=Docker Application Container Engine (Rootless)
Documentation=https://docs.docker.com/go/rootless/

[Service]
Environment=PATH=/home/nextcloud_docker/bin:/sbin:/usr/sbin:/home/nextcloud_docker/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
ExecStart=/bin/dockerd-rootless.sh
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
Type=notify
NotifyAccess=all
KillMode=mixed

[Install]
WantedBy=default.target

Configuration

loginctl enable-linger nextcloud_docker
nextcloud_docker@blackcore:~$ XDG_RUNTIME_DIR=/run/user/$(id -u nextcloud_docker) systemctl --user daemon-reload
nextcloud_docker@blackcore:~$ XDG_RUNTIME_DIR=/run/user/$(id -u nextcloud_docker) systemctl --user start docker
nextcloud_docker@blackcore:~$ XDG_RUNTIME_DIR=/run/user/$(id -u nextcloud_docker) systemctl --user status docker
● docker.service - Docker Application Container Engine (Rootless)
     Loaded: loaded (/home/nextcloud_docker/.config/systemd/user/docker.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2023-02-22 06:07:22 JST; 3s ago
       Docs: https://docs.docker.com/go/rootless/
   Main PID: 912102 (rootlesskit)
      Tasks: 54
     Memory: 45.7M
        CPU: 205ms
     CGroup: /user.slice/user-1004.slice/user@1004.service/app.slice/docker.service
             ├─912102 rootlesskit --net=slirp4netns --mtu=65520 --slirp4netns-sandbox=auto --slirp4netns-seccomp=auto --disable-host-loopback --port-driver=builtin --copy-up=/etc --copy-up=/run --propagation=rslave /bin/dockerd-rootless.sh
             ├─912113 /proc/self/exe --net=slirp4netns --mtu=65520 --slirp4netns-sandbox=auto --slirp4netns-seccomp=auto --disable-host-loopback --port-driver=builtin --copy-up=/etc --copy-up=/run --propagation=rslave /bin/dockerd-rootless.sh
             ├─912132 slirp4netns --mtu 65520 -r 3 --disable-host-loopback --enable-sandbox --enable-seccomp 912113 tap0
             ├─912140 dockerd
             └─912166 containerd --config /run/user/1004/docker/containerd/containerd.toml --log-level info

Feb 22 06:07:22 blackcore dockerd-rootless.sh[912140]: time="2023-02-22T06:07:22.095812655+09:00" level=warning msg="WARNING: No io.max (wbps) support"
Feb 22 06:07:22 blackcore dockerd-rootless.sh[912140]: time="2023-02-22T06:07:22.095815109+09:00" level=warning msg="WARNING: No io.max (riops) support"
Feb 22 06:07:22 blackcore dockerd-rootless.sh[912140]: time="2023-02-22T06:07:22.095817634+09:00" level=warning msg="WARNING: No io.max (wiops) support"
Feb 22 06:07:22 blackcore dockerd-rootless.sh[912140]: time="2023-02-22T06:07:22.095820449+09:00" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
Feb 22 06:07:22 blackcore dockerd-rootless.sh[912140]: time="2023-02-22T06:07:22.095823155+09:00" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
Feb 22 06:07:22 blackcore dockerd-rootless.sh[912140]: time="2023-02-22T06:07:22.095831981+09:00" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
Feb 22 06:07:22 blackcore dockerd-rootless.sh[912140]: time="2023-02-22T06:07:22.095857399+09:00" level=info msg="Daemon has completed initialization"
Feb 22 06:07:22 blackcore dockerd-rootless.sh[912140]: time="2023-02-22T06:07:22.104160647+09:00" level=info msg="[core] [Server #10] Server created" module=grpc
Feb 22 06:07:22 blackcore systemd[892130]: Started Docker Application Container Engine (Rootless).
Feb 22 06:07:22 blackcore dockerd-rootless.sh[912140]: time="2023-02-22T06:07:22.110857925+09:00" level=info msg="API listen on /run/user/1004/docker.sock"

XDG_RUNTIME_DIR=/run/user/$(id -u nextcloud_docker) systemctl --user enable docker

Checking that it works

Test: does hello-world run?

nextcloud_docker@blackcore:~$ docker -H unix:///run/user/1004/docker.sock run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
2db29710123e: Pull complete 
Digest: sha256:6e8b6f026e0b9c419ea0fd02d3905dd0952ad1feea67543f525c73a0a790fefb
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Creating a docker-compose setup for Nextcloud

nextcloud_docker@blackcore:/mnt/backuparea/nextcloud$ cat db.env
MYSQL_ROOT_PASSWORD=****************
MYSQL_PASSWORD=****************
MYSQL_DATABASE=nextcloud
MYSQL_USER=nextcloud

nextcloud_docker@blackcore:/mnt/backuparea/nextcloud$ cat docker-compose.yml 
version: '3'

services:
  db:
    image: mariadb:10.5
    command: --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW
    restart: always
    volumes:
      - db:/var/lib/mysql
    environment:
      - MARIADB_AUTO_UPGRADE=1
      - MARIADB_DISABLE_UPGRADE_BACKUP=1
    env_file:
      - db.env
    ports:
      - 23306:3306

  redis:
    image: redis:alpine
    restart: always

  app:
    image: nextcloud:fpm-alpine
    restart: always
    volumes:
      - nextcloud:/var/www/html
    environment:
      - MYSQL_HOST=db
      - REDIS_HOST=redis
    env_file:
      - db.env
    depends_on:
      - db
      - redis

  web:
    image: nginx
    restart: always
    ports:
      - 28080:80
    volumes:
      - nextcloud:/var/www/html:ro
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - app

  cron:
    image: nextcloud:fpm-alpine
    restart: always
    volumes:
      - nextcloud:/var/www/html
    entrypoint: /cron.sh
    depends_on:
      - db
      - redis

volumes:
  db:
  nextcloud:

Then the nginx config:

nextcloud_docker@blackcore:/mnt/backuparea/nextcloud$ cat nginx.conf 
worker_processes auto;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    # Prevent nginx HTTP Server Detection
    server_tokens   off;

    keepalive_timeout  65;

    #gzip  on;

    upstream php-handler {
        server app:9000;
    }

    server {
        listen 80;

        # HSTS settings
        # WARNING: Only add the preload option once you read about
        # the consequences in https://hstspreload.org/. This option
        # will add the domain to a hardcoded list that is shipped
        # in all major browsers and getting removed from this list
        # could take several months.
        #add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;" always;

        # set max upload size
        client_max_body_size 512M;
        fastcgi_buffers 64 4K;

        # Enable gzip but do not remove ETag headers
        gzip on;
        gzip_vary on;
        gzip_comp_level 4;
        gzip_min_length 256;
        gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
        gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;

        # Pagespeed is not supported by Nextcloud, so if your server is built
        # with the `ngx_pagespeed` module, uncomment this line to disable it.
        #pagespeed off;

        # HTTP response headers borrowed from Nextcloud `.htaccess`
        add_header Referrer-Policy                      "no-referrer"   always;
        add_header X-Content-Type-Options               "nosniff"       always;
        add_header X-Download-Options                   "noopen"        always;
        add_header X-Frame-Options                      "SAMEORIGIN"    always;
        add_header X-Permitted-Cross-Domain-Policies    "none"          always;
        add_header X-Robots-Tag                         "none"          always;
        add_header X-XSS-Protection                     "1; mode=block" always;

        # Remove X-Powered-By, which is an information leak
        fastcgi_hide_header X-Powered-By;

        # Path to the root of your installation
        root /var/www/html;

        # Specify how to handle directories -- specifying `/index.php$request_uri`
        # here as the fallback means that Nginx always exhibits the desired behaviour
        # when a client requests a path that corresponds to a directory that exists
        # on the server. In particular, if that directory contains an index.php file,
        # that file is correctly served; if it doesn't, then the request is passed to
        # the front-end controller. This consistent behaviour means that we don't need
        # to specify custom rules for certain paths (e.g. images and other assets,
        # `/updater`, `/ocm-provider`, `/ocs-provider`), and thus
        # `try_files $uri $uri/ /index.php$request_uri`
        # always provides the desired behaviour.
        index index.php index.html /index.php$request_uri;

        # Rule borrowed from `.htaccess` to handle Microsoft DAV clients
        location = / {
            if ( $http_user_agent ~ ^DavClnt ) {
                return 302 /remote.php/webdav/$is_args$args;
            }
        }

        location = /robots.txt {
            allow all;
            log_not_found off;
            access_log off;
        }

        # Make a regex exception for `/.well-known` so that clients can still
        # access it despite the existence of the regex rule
        # `location ~ /(\.|autotest|...)` which would otherwise handle requests
        # for `/.well-known`.
        location ^~ /.well-known {
            # The rules in this block are an adaptation of the rules
            # in `.htaccess` that concern `/.well-known`.

            location = /.well-known/carddav { return 301 /remote.php/dav/; }
            location = /.well-known/caldav  { return 301 /remote.php/dav/; }

            location /.well-known/acme-challenge    { try_files $uri $uri/ =404; }
            location /.well-known/pki-validation    { try_files $uri $uri/ =404; }

            # Let Nextcloud's API for `/.well-known` URIs handle all other
            # requests by passing them to the front-end controller.
            return 301 /index.php$request_uri;
        }

        # Rules borrowed from `.htaccess` to hide certain paths from clients
        location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/)  { return 404; }
        location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console)                { return 404; }

        # Ensure this block, which passes PHP files to the PHP process, is above the blocks
        # which handle static assets (as seen below). If this block is not declared first,
        # then Nginx will encounter an infinite rewriting loop when it prepends `/index.php`
        # to the URI, resulting in a HTTP 500 error response.
        location ~ \.php(?:$|/) {
            # Required for legacy support
            rewrite ^/(?!index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+|.+\/richdocumentscode\/proxy) /index.php$request_uri;

            fastcgi_split_path_info ^(.+?\.php)(/.*)$;
            set $path_info $fastcgi_path_info;

            try_files $fastcgi_script_name =404;

            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $path_info;
            #fastcgi_param HTTPS on;

            fastcgi_param modHeadersAvailable true;         # Avoid sending the security headers twice
            fastcgi_param front_controller_active true;     # Enable pretty urls
            fastcgi_pass php-handler;

            fastcgi_intercept_errors on;
            fastcgi_request_buffering off;
        }

        location ~ \.(?:css|js|svg|gif)$ {
            try_files $uri /index.php$request_uri;
            expires 6M;         # Cache-Control policy borrowed from `.htaccess`
            access_log off;     # Optional: Don't log access to assets
        }

        location ~ \.woff2?$ {
            try_files $uri /index.php$request_uri;
            expires 7d;         # Cache-Control policy borrowed from `.htaccess`
            access_log off;     # Optional: Don't log access to assets
        }

        # Rule borrowed from `.htaccess`
        location /remote {
            return 301 /remote.php$request_uri;
        }

        location / {
            try_files $uri $uri/ /index.php$request_uri;
        }
    }
}

Let's run it

nextcloud_docker@blackcore:/mnt/backuparea/nextcloud$ docker-compose -H unix:///run/user/1004/docker.sock up
Starting nextcloud_db_1    ... done
Starting nextcloud_redis_1 ... done
Starting nextcloud_cron_1  ... done
Starting nextcloud_app_1   ... done
Starting nextcloud_web_1   ... done
Attaching to nextcloud_db_1, nextcloud_redis_1, nextcloud_app_1, nextcloud_cron_1, nextcloud_web_1
db_1     | 2023-03-16 20:30:42+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.5.19+maria~ubu2004 started.

It works!


Restoring the backup data

https://docs.nextcloud.com/server/latest/admin_manual/maintenance/index.html
https://docs.nextcloud.com/server/latest/admin_manual/maintenance/migrating.html#

DB data

  • Import (use -h 127.0.0.1 so the client connects over TCP to the published port instead of the local Unix socket)
mysql -h 127.0.0.1 --port=23306 -u root -p nextcloud < nextcloud-sqlbkp.bak
  • Update oc_storages
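The migration docs mention fixing up oc_storages when the data directory path changes. A sketch of the kind of statement involved: the old path here is purely hypothetical, while the new one is the container-side path used by the nextcloud:fpm-alpine image; the SQL is only printed, to be piped into the mysql command above.

```shell
# Sketch: after importing the dump, repoint the local storage entry at
# the new data directory. Old path is a made-up example; adjust to match
# the row actually present in oc_storages.
old='local::/var/www/nextcloud/data/'
new='local::/var/www/html/data/'
sql="UPDATE oc_storages SET id='$new' WHERE id='$old';"
echo "$sql"   # pipe into: mysql -h 127.0.0.1 --port=23306 -u root -p nextcloud
```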

Data files

  • Copy
sudo cp -r nextcloud/ /mnt/backuparea/nextcloud/data/
sudo chown 200081 -R /mnt/backuparea/nextcloud/data/
sudo chgrp 200081 -R /mnt/backuparea/nextcloud/data/
  • Permissions
adeno@blackcore:/mnt/backuparea/nextcloud$ ls -l data/nextcloud/data
total 135988
drwxr-xr-x  7 200081 extcloud-rtls-docker      4096 Feb 17  2019 admin
(snip)
  • Note to self
    I think I ended up resorting to a symbolic link as a last-ditch fix, but what was it again??
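A note on the magic number 200081: with rootless Docker, container UID N (for N ≥ 1) maps to subuid_base + N − 1 on the host, and www-data in the Alpine-based images is UID 82. So if nextcloud_docker's subordinate range starts at 200000 (an assumption; only gogs_docker's range is shown in these notes), the math works out:

```shell
# Rootless user-namespace mapping: host_uid = subuid_base + container_uid - 1
# (container uid 0 maps to the invoking user itself).
subuid_base=200000   # assumed start of nextcloud_docker's subuid range
container_uid=82     # www-data in the Alpine-based images
echo $(( subuid_base + container_uid - 1 ))   # → 200081
```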

Fine-tuning

Security & setup warnings

Handling HTTPS

Give Traefik a try?

https://coders-shelf.com/traefik-intro/
https://qiita.com/adwin/items/ccc34ef5f4c88d8fa02c#おまけ-https-化も楽勝

cat traefik.yml 
------------------------------------------------------------
api:
  insecure: true # so the WebUI is reachable
  dashboard: true

entryPoints:
  http:
    address: ":80"

  https:
    address: ":20443"

providers:
  docker:
#    network: sample_traefik
    exposedByDefault: false

cat docker-compose.yml 
------------------------------------------------------------
version: '3'

services:
  db:
    image: mariadb:10.5
    command: --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW
    restart: always
    volumes:
      - ./data/db:/var/lib/mysql
    environment:
      - MARIADB_AUTO_UPGRADE=1
      - MARIADB_DISABLE_UPGRADE_BACKUP=1
    env_file:
      - db.env
    ports:
      - 23306:3306

  redis:
    image: redis:alpine
    restart: always

  app:
    image: nextcloud:fpm-alpine
    restart: always
    volumes:
      - ./data/nextcloud:/var/www/html
    environment:
      - MYSQL_HOST=db
      - REDIS_HOST=redis
    env_file:
      - db.env
    depends_on:
      - db
      - redis

  web:
    image: nginx
    restart: always
    ports:
      - 28080:80
    volumes:
      - ./data/nextcloud:/var/www/html:ro
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - app
    labels:
      - traefik.enable=true
      - traefik.http.routers.servicename.rule=Host(`blackcore.local`)
      - traefik.http.routers.servicename.entrypoints=https
      - traefik.http.routers.servicename.tls=true

  cron:
    image: nextcloud:fpm-alpine
    restart: always
    volumes:
      - ./data/nextcloud:/var/www/html
    entrypoint: /cron.sh
    depends_on:
      - db
      - redis

  traefik:
    image: traefik
    ports:
      - 28081:8080
      - 28082:80
      - 20443:20443
    volumes:
      - /run/user/1004/docker.sock:/var/run/docker.sock:ro
      - ./traefik.yml:/etc/traefik/traefik.yml

volumes:
  db:
  nextcloud:

Accessing over HTTPS just returns 404 Not Found.

HTTPS with nginx

A reverse proxy isn't actually required; if nginx itself can terminate HTTPS, that covers everything I want to do.
Add the following to nginx.conf:

cat nginx.conf
---
	# SSL configuration
	#
	listen 443 ssl default_server;
	listen [::]:443 ssl default_server;
 	#ssl_certificate /etc/nginx/server.crt;
	#ssl_certificate_key /etc/nginx/server.key;
	ssl_protocols TLSv1.2 TLSv1.3;
	ssl_ciphers HIGH:!aNULL:!MD5;
	ssl_certificate     /etc/letsencrypt/live/*****.jp/fullchain.pem;
	ssl_certificate_key /etc/letsencrypt/live/*****.jp/privkey.pem;
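
The commented-out server.crt/server.key lines hint at a self-signed setup; for LAN-only testing before the Let's Encrypt paths exist, a throwaway pair can be generated like this (paths and CN are illustrative, not from the original notes):

```shell
# Sketch: self-signed certificate pair for testing the ssl_* directives.
# The files would then be mounted into the nginx container at the paths
# referenced from nginx.conf.
dir="${TMPDIR:-/tmp}/nginx-selfsigned"
mkdir -p "$dir"
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=blackcore.local" \
  -keyout "$dir/server.key" -out "$dir/server.crt"
ls -l "$dir"
```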

This is getting long, and with the gap in time I'm losing the thread, so I'd like to step back and reorganize at some point.

Thursday, February 16, 2023

Home Server Migration (3)

Considering virtualization

With future availability in mind, I'm looking at virtualizing or containerizing the services running on this server.
What would the candidates be?

  • samba
  • gogs
  • nextcloud
  • mydns / global IP monitoring
  • login/logout monitoring (Slack integration)
  • MariaDB
  • webmin

gogs

Starting with gogs

Current state

Good thing I kept notes:
https://continue-to-challenge.blogspot.com/search?q=gogs

adeno@blackcube:/home/git$ systemctl status gogs
● gogs.service - Gogs (Go Git Service)
   Loaded: loaded (/etc/systemd/system/gogs.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2023-01-23 04:00:09 JST; 1h 47min ago
 Main PID: 2548 (gogs)
    Tasks: 8 (limit: 4915)
   CGroup: /system.slice/gogs.service
           └─2548 /home/git/gogs/gogs web

Jan 23 05:43:38 blackcube gogs[2548]: [Macaron] 2023-01-23 05:43:38: Started GET /admin/config for 192.168.1.37
Jan 23 05:43:38 blackcube gogs[2548]: [Macaron] 2023-01-23 05:43:38: Completed GET /admin/config 200 OK in 21.32905ms
Jan 23 05:43:38 blackcube gogs[2548]: [Macaron] 2023-01-23 05:43:38: Started GET /assets/font-awesome-4.6.3/fonts/fontawesome-webfont.woff2?
Jan 23 05:43:38 blackcube gogs[2548]: [Macaron] [Static] Serving /assets/font-awesome-4.6.3/fonts/fontawesome-webfont.woff2
Jan 23 05:43:38 blackcube gogs[2548]: [Macaron] 2023-01-23 05:43:38: Started GET /img/favicon.png for 192.168.1.37
Jan 23 05:43:38 blackcube gogs[2548]: [Macaron] [Static] Serving /img/favicon.png
Jan 23 05:43:38 blackcube gogs[2548]: [Macaron] 2023-01-23 05:43:38: Completed GET /img/favicon.png 200 OK in 2.466123ms
Jan 23 05:43:38 blackcube gogs[2548]: [Macaron] 2023-01-23 05:43:38: Completed GET /assets/font-awesome-4.6.3/fonts/fontawesome-webfont.woff
Jan 23 05:43:40 blackcube gogs[2548]: [Macaron] 2023-01-23 05:43:40: Started GET /admin/repos for 192.168.1.37
Jan 23 05:43:40 blackcube gogs[2548]: [Macaron] 2023-01-23 05:43:40: Completed GET /admin/repos 200 OK in 107.448955ms

I probably used this as a reference when turning it into a service:
https://github.com/gogs/gogs/blob/main/scripts/systemd/gogs.service

adeno@blackcube:/home/git$ cat /etc/systemd/system/gogs.service
[Unit]
Description=Gogs (Go Git Service)
After=syslog.target
After=network.target
After=mysqld.service

[Service]
# Modify these two values and uncomment them if you have
# repos with lots of files and get an HTTP error 500 because
# of that
###
#LimitMEMLOCK=infinity
#LimitNOFILE=65535
Type=simple
User=git
Group=git
WorkingDirectory=/home/git/gogs
ExecStart=/home/git/gogs/gogs web
Restart=always
Environment=USER=git HOME=/home/git

[Install]
WantedBy=multi-user.target

Trying it with Docker

https://github.com/gogs/gogs/tree/main/docker

sudo docker pull gogs/gogs
mkdir -p /mnt/workarea/gogs
sudo docker run --name=gogs -p 10022:22 -p 3000:3000 -v /mnt/workarea/gogs:/data gogs/gogs

It's been a while since I touched Docker and I've forgotten how to use it.
Also, how do I migrate the gogs data again?

adeno@blackcore:~$ sudo docker ps -a
[sudo] password for adeno:
CONTAINER ID   IMAGE       COMMAND                  CREATED      STATUS                  PORTS     NAMES
5c8dba490d7e   gogs/gogs   "/app/gogs/docker/st…"   9 days ago   Exited (0) 8 days ago             gogs

Migrating the data

https://github.com/gogs/gogs/discussions/6876

./gogs backup

to export. This produced gogs-backup-20230123060827.zip.
Saving it under /mnt/workarea/gogs makes it accessible as /data inside the container.

adeno@blackcore:~$ sudo docker exec -it gogs /bin/bash
bash-5.1# ls
data    docker  gogs    log
bash-5.1# ./gogs -v
Gogs version 0.13.0+dev
bash-5.1# ls data/
gogs-backup-20230123060827.zip  gogs.db                         sessions
bash-5.1# 
bash-5.1# ./gogs restore --from="data/gogs-backup-20230123060827.zip" 
2023/01/31 16:28:35 [ INFO] Restoring backup from: data/gogs-backup-20230123060827.zip
2023/01/31 16:28:38 [FATAL] [gogs.io/gogs/gogs.go:40 main()] Failed to start application: init configuration: user configured to run Gogs is "git", but the current user is "root"
bash-5.1# 

The current user is root, so the restore has to be run as the git user.
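docker exec can set the user directly, so the restore can likely be rerun as git without opening a shell in the container first. A sketch, only printed here: -u and -w are standard docker exec flags, and the backup path is the one from the session above.

```shell
# Sketch (not run here): rerun the restore inside the gogs container as
# the git user. -u sets the user, -w the working directory.
cmd='sudo docker exec -u git -w /app/gogs gogs ./gogs restore --from=data/gogs-backup-20230123060827.zip'
echo "$cmd"
```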

docker-compose

It all seems a bit fiddly, so let's try docker-compose instead

sudo apt install docker-compose

version: '3'
services:
  gogs:
    image: gogs/gogs:latest
    container_name: gogs
    restart: always
    ports:
      - 3000:3000
    volumes:
      - ./data:/data
    links:
      - mariadb:db

  mariadb:
    image: mariadb:latest
    restart: always
    ports:
      - 13306:3306
    environment:
      - MARIADB_ROOT_PASSWORD=************
      - MARIADB_DATABASE=gogs
      - MARIADB_USER=gogs
      - MARIADB_PASSWORD=************

    volumes:
      - ./mariadb/data:/var/lib/mysql
      - ./mariadb/my.cnf:/etc/mysql/conf.d/my.cnf
      - ./mariadb/sql:/docker-entrypoint-initdb.d

sudo docker-compose up -d
sudo docker-compose ps
sudo docker-compose stop

https://qiita.com/wasanx25/items/d47caf37b79e855af95f
https://mebee.info/2020/08/05/post-15924/

Moving the data

  • database
  • gogs-repositories
  • config

Database

mysqldump -u git -p gogs_git > gogs.sql.bak
mysql -h 127.0.0.1 --port=13306 -u gogs -p gogs < /home/adeno/gogs.sql.bak

gogs-repositories

Copy into data/gogs/data/gogs-repositories

config

[repository]
ROOT = /app/gogs/data/gogs-repositories

In the end

gogs backup

wasn't used after all.

For the database settings during the initial setup, I referred to:
https://mebee.info/2020/08/05/post-15924/

Set the database host name to gogs_mariadb_1

adeno@blackcore:/mnt/backuparea/gogs$ sudo  docker-compose ps
     Name                   Command                  State                      Ports               
----------------------------------------------------------------------------------------------------
gogs             /app/gogs/docker/start.sh  ...   Up (healthy)   22/tcp, 0.0.0.0:3000-              
                                                                 >3000/tcp,:::3000->3000/tcp        
gogs_mariadb_1   docker-entrypoint.sh mariadbd    Up             0.0.0.0:13306->3306/tcp,:::13306-  
                                                                 >3306/tcp                          

Runs as root

It bothers me.
It runs as root, and the files it creates are owned by root,
yet inside the container they show up as git.

adeno@blackcore:/mnt/backuparea/gogs$ ls -l data/gogs/data/
total 12
drwxr-xr-x 6 root root 4096 Feb  6 12:34 gogs
drwxr-xr-x 7 root root 4096 Feb  6 12:34 gogs-repositories
drwx------ 3 root root 4096 Feb  6 12:34 sessions
sudo docker exec -it gogs /bin/bash
0f3d4c05ae00:/app/gogs# ls -l data/
total 12
drwxrwxr-x    6 git      git           4096 Feb  5 15:59 gogs
drwxr-xr-x    7 git      git           4096 Feb  5 15:59 gogs-repositories
drwx------    4 git      git           4096 Feb  6 03:35 sessions

https://qiita.com/yitakura731/items/36a2ba117ccbc8792aa7

Still bothered.

Is there some way around this?

  • rootless
  • rootless + SELinux
  • Podman

https://e-penguiner.com/rootless-docker-for-nonroot/
https://matsuand.github.io/docs.docker.jp.onthefly/engine/security/rootless/
https://matsuand.github.io/docs.docker.jp.onthefly/engine/security/userns-remap/
https://docs.docker.jp/desktop/install/linux-install.html#linux-install-file-sharing

Trying rootless Docker

Let me tidy up the results of a good deal of trial and error.

First, the normal installation

https://docs.docker.com/engine/install/ubuntu/

Note that only the following step differs on Linux Mint:

3. Use the following command to set up the repository:

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
$ lsb_release -cs
vanessa

lsb_release -cs returns the Mint codename, but the Docker repo needs the underlying Ubuntu one, which lives in UBUNTU_CODENAME in /etc/os-release:

$ cat /etc/os-release 
NAME="Linux Mint"
VERSION="21 (Vanessa)"
ID=linuxmint
ID_LIKE="ubuntu debian"
PRETTY_NAME="Linux Mint 21"
VERSION_ID="21"
HOME_URL="https://www.linuxmint.com/"
SUPPORT_URL="https://forums.linuxmint.com/"
BUG_REPORT_URL="http://linuxmint-troubleshooting-guide.readthedocs.io/en/latest/"
PRIVACY_POLICY_URL="https://www.linuxmint.com/"
VERSION_CODENAME=vanessa
UBUNTU_CODENAME=jammy

So I just hard-code jammy:

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  jammy stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
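The lookup can also be scripted instead of hard-coded. A sketch that reads UBUNTU_CODENAME, falling back to VERSION_CODENAME on plain Ubuntu, run here against a trimmed copy of the Mint 21 values above rather than the live /etc/os-release:

```shell
# Derive the codename for the Docker apt repo from os-release data.
# A trimmed copy of the Mint 21 file stands in for /etc/os-release here.
os_release=$(mktemp)
cat > "$os_release" <<'EOF'
ID=linuxmint
VERSION_CODENAME=vanessa
UBUNTU_CODENAME=jammy
EOF
. "$os_release"
codename="${UBUNTU_CODENAME:-$VERSION_CODENAME}"
echo "$codename"   # → jammy
rm -f "$os_release"
```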

Verifying it works

This checks that Docker works in normal (rootful) mode:

sudo docker run hello-world

Rootless mode

Create a user

  • User: gogs_docker (uid 1003)
  • Group: gogs-rtls-docker (gid 10099)

Configure subordinate UIDs/GIDs

$ cat /etc/subuid
gogs_docker:100000:65536

$ cat /etc/subgid
gogs_docker:100000:65536

Installation

https://matsuand.github.io/docs.docker.jp.onthefly/engine/security/rootless/

gogs_docker@blackcore:~$ dockerd-rootless-setuptool.sh install
[INFO] systemd not detected, dockerd-rootless.sh needs to be started manually:

PATH=/home/gogs_docker/bin:/sbin:/usr/sbin:$PATH dockerd-rootless.sh 

[INFO] Creating CLI context "rootless"
Successfully created context "rootless"
[INFO] Use CLI context "rootless"
Current context is now "rootless"
Warning: DOCKER_HOST environment variable overrides the active context. To use "rootless", either set the global --context flag, or unset DOCKER_HOST environment variable.

[INFO] Make sure the following environment variables are set (or add them to ~/.bashrc):

# WARNING: systemd not found. You have to remove XDG_RUNTIME_DIR manually on every logout.
export XDG_RUNTIME_DIR=/home/gogs_docker/.docker/run
export PATH=/home/gogs_docker/bin:$PATH
Some applications may require the following environment variable too:
export DOCKER_HOST=unix:///home/gogs_docker/.docker/run/docker.sock

Don't forget to add these to .bashrc:

export XDG_RUNTIME_DIR=/home/gogs_docker/.docker/run
export PATH=/home/gogs_docker/bin:$PATH

Verifying it works

gogs_docker@blackcore:~$ systemctl --user start docker
Failed to connect to bus: No such file or directory

gogs_docker@blackcore:~$ systemctl --user status
Failed to connect to bus: No such file or directory

Huh?

XDG_RUNTIME_DIR=/run/user/$(id -u gogs_docker) systemctl --user status
● blackcore
    State: degraded
     Jobs: 0 queued
   Failed: 2 units
    Since: Sun 2023-02-12 01:52:28 JST; 9h ago
   CGroup: /user.slice/user-1003.slice/user@1003.service
<<snip>>

XDG_RUNTIME_DIR=/run/user/$(id -u gogs_docker) systemctl --user start docker
Failed to start docker.service: Unit docker.service not found.

I see, so there's no docker.service...

Creating docker.service by hand

With no other option, I created
.config/systemd/user/docker.service
by hand.

[Unit]
Description=Docker Application Container Engine (Rootless)
Documentation=https://docs.docker.com/go/rootless/

[Service]
Environment=PATH=/home/gogs_docker/bin:/sbin:/usr/sbin:/home/gogs_docker/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
ExecStart=/home/gogs_docker/bin/dockerd-rootless.sh 
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
Type=notify
NotifyAccess=all
KillMode=mixed

[Install]
WantedBy=default.target

This time for sure.
First, check from a user that can sudo:

adeno@blackcore:~$ sudo -u gogs_docker XDG_RUNTIME_DIR=/run/user/$(id -u gogs_docker) systemctl --user status
[sudo] password for adeno:
● blackcore
    State: running
     Jobs: 0 queued
   Failed: 0 units
    Since: Sun 2023-02-12 12:56:38 JST; 8h ago
   CGroup: /user.slice/user-1003.slice/user@1003.service
           ├─session.slice 
           │ └─pipewire.service 
           │   └─1188 /usr/bin/pipewire
           ├─app.slice 
           │ ├─docker.service 
           │ │ ├─16301 rootlesskit --net=slirp4netns --mtu=65520 --slirp4netns-sandbox=auto --slirp4netns-seccomp=auto --disable-host-loopback --port-driver=builtin>
           │ │ ├─16310 /proc/self/exe --net=slirp4netns --mtu=65520 --slirp4netns-sandbox=auto --slirp4netns-seccomp=auto --disable-host-loopback --port-driver=buil>
           │ │ ├─16328 slirp4netns --mtu 65520 -r 3 --disable-host-loopback --enable-sandbox --enable-seccomp 16310 tap0
           │ │ ├─16336 dockerd
           │ │ └─16363 containerd --config /run/user/1003/docker/containerd/containerd.toml --log-level info
           │ └─dbus.service 
           │   └─1234 /usr/bin/dbus-daemon --session --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
           └─init.scope 
             ├─1140 /lib/systemd/systemd --user
             └─1149 (sd-pam)

sudo -u gogs_docker XDG_RUNTIME_DIR=/run/user/$(id -u gogs_docker) systemctl --user start docker

Good.
Next, check from the regular user that will actually run Docker:

XDG_RUNTIME_DIR=/run/user/$(id -u gogs_docker) systemctl --user status
● blackcore
    State: running
     Jobs: 0 queued
   Failed: 0 units
    Since: Sun 2023-02-12 12:56:38 JST; 8h ago
   CGroup: /user.slice/user-1003.slice/user@1003.service
           ├─session.slice 
           │ └─pipewire.service 
           │   └─1188 /usr/bin/pipewire
           ├─app.slice 
           │ ├─docker.service 
           │ │ ├─16301 rootlesskit --net=slirp4netns --mtu=65520 --slirp4netns-sandbox=auto --slirp4netns-seccomp=auto --disable-host-loopback --port-driver=builtin>
           │ │ ├─16310 /proc/self/exe --net=slirp4netns --mtu=65520 --slirp4netns-sandbox=auto --slirp4netns-seccomp=auto --disable-host-loopback --port-driver=buil>
           │ │ ├─16328 slirp4netns --mtu 65520 -r 3 --disable-host-loopback --enable-sandbox --enable-seccomp 16310 tap0
           │ │ ├─16336 dockerd
           │ │ └─16363 containerd --config /run/user/1003/docker/containerd/containerd.toml --log-level info
           │ └─dbus.service 
           │   └─1234 /usr/bin/dbus-daemon --session --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
           └─init.scope 
             ├─1140 /lib/systemd/systemd --user
             └─1149 (sd-pam)

Nice, I can read the state now.
Let's run the sample:

$ docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

<<snip: same output as the earlier hello-world run>>

Good.

gogs_docker@blackcore:/mnt/backuparea/gogs_rootless$ docker-compose up
gogs_rootless_mariadb_1 is up-to-date
Starting gogs ... done
Attaching to gogs_rootless_mariadb_1, gogs
<<snip>>

OK, gogs is up and running!

Auto-start

 systemctl --user enable docker
 sudo loginctl enable-linger $(whoami)
  • Note
XDG_RUNTIME_DIR=/run/user/$(id -u gogs_docker) systemctl --user enable docker
Created symlink /home/gogs_docker/.config/systemd/user/default.target.wants/docker.service → /home/gogs_docker/.config/systemd/user/docker.service.

But after a reboot, docker ps could no longer see the running containers.

gogs_docker@blackcore:/mnt/backuparea/gogs_rootless$ docker ps -a
Cannot connect to the Docker daemon at unix:///home/gogs_docker/.docker/run/docker.sock. Is the docker daemon running?

Explicitly pointing at the socket made it work:

docker -H unix:///run/user/1003/docker.sock ps
CONTAINER ID   IMAGE              COMMAND                   CREATED       STATUS                 PORTS                                               NAMES
c6a2687906f9   gogs/gogs:latest   "/app/gogs/docker/st…"   3 hours ago   Up 3 hours (healthy)   22/tcp, 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp   gogs
2b74ddf55d44   mariadb:latest     "docker-entrypoint.s…"   3 hours ago   Up 3 hours             0.0.0.0:13306->3306/tcp, :::13306->3306/tcp         gogs_rootless_mariadb_1
$ docker-compose ps
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 699, in urlopen
    httplib_response = self._make_request(
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 394, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/lib/python3.10/http/client.py", line 1282, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/usr/lib/python3.10/http/client.py", line 1328, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.10/http/client.py", line 1277, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.10/http/client.py", line 1037, in _send_output
    self.send(msg)
  File "/usr/lib/python3.10/http/client.py", line 975, in send
    self.connect()
  File "/usr/lib/python3/dist-packages/docker/transport/unixconn.py", line 30, in connect
    sock.connect(self.unix_socket)
FileNotFoundError: [Errno 2] No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 439, in send
    resp = conn.urlopen(
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 755, in urlopen
    retries = retries.increment(
  File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 532, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/usr/lib/python3/dist-packages/six.py", line 718, in reraise
    raise value.with_traceback(tb)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 699, in urlopen
    httplib_response = self._make_request(
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 394, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/lib/python3.10/http/client.py", line 1282, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/usr/lib/python3.10/http/client.py", line 1328, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.10/http/client.py", line 1277, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.10/http/client.py", line 1037, in _send_output
    self.send(msg)
  File "/usr/lib/python3.10/http/client.py", line 975, in send
    self.connect()
  File "/usr/lib/python3/dist-packages/docker/transport/unixconn.py", line 30, in connect
    sock.connect(self.unix_socket)
urllib3.exceptions.ProtocolError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/docker/api/client.py", line 214, in _retrieve_server_version
    return self.version(api_version=False)["ApiVersion"]
  File "/usr/lib/python3/dist-packages/docker/api/daemon.py", line 181, in version
    return self._result(self._get(url), json=True)
  File "/usr/lib/python3/dist-packages/docker/utils/decorators.py", line 46, in inner
    return f(self, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/docker/api/client.py", line 237, in _get
    return self.get(url, **self._set_request_timeout(kwargs))
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 555, in get
    return self.request('GET', url, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 542, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 655, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 498, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/docker-compose", line 33, in <module>
    sys.exit(load_entry_point('docker-compose==1.29.2', 'console_scripts', 'docker-compose')())
  File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 81, in main
    command_func()
  File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 200, in perform_command
    project = project_from_options('.', options)
  File "/usr/lib/python3/dist-packages/compose/cli/command.py", line 60, in project_from_options
    return get_project(
  File "/usr/lib/python3/dist-packages/compose/cli/command.py", line 152, in get_project
    client = get_client(
  File "/usr/lib/python3/dist-packages/compose/cli/docker_client.py", line 41, in get_client
    client = docker_client(
  File "/usr/lib/python3/dist-packages/compose/cli/docker_client.py", line 170, in docker_client
    client = APIClient(use_ssh_client=not use_paramiko_ssh, **kwargs)
  File "/usr/lib/python3/dist-packages/docker/api/client.py", line 197, in __init__
    self._version = self._retrieve_server_version()
  File "/usr/lib/python3/dist-packages/docker/api/client.py", line 221, in _retrieve_server_version
    raise DockerException(
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/apport_python_hook.py", line 153, in apport_excepthook
    with os.fdopen(os.open(pr_filename,
FileNotFoundError: [Errno 2] No such file or directory: '/var/crash/_usr_bin_docker-compose.1003.crash'

Original exception was:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 699, in urlopen
    httplib_response = self._make_request(
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 394, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/lib/python3.10/http/client.py", line 1282, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/usr/lib/python3.10/http/client.py", line 1328, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.10/http/client.py", line 1277, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.10/http/client.py", line 1037, in _send_output
    self.send(msg)
  File "/usr/lib/python3.10/http/client.py", line 975, in send
    self.connect()
  File "/usr/lib/python3/dist-packages/docker/transport/unixconn.py", line 30, in connect
    sock.connect(self.unix_socket)
FileNotFoundError: [Errno 2] No such file or directory

(…followed by the same ConnectionError and DockerException chain shown above.)

Likewise here:

docker-compose -H unix:///run/user/1003/docker.sock ps
         Name                        Command                  State                            Ports                      
--------------------------------------------------------------------------------------------------------------------------
gogs                      /app/gogs/docker/start.sh  ...   Up (healthy)   22/tcp, 0.0.0.0:3000->3000/tcp,:::3000->3000/tcp
gogs_rootless_mariadb_1   docker-entrypoint.sh mariadbd    Up             0.0.0.0:13306->3306/tcp,:::13306->3306/tcp      

Maybe the

export XDG_RUNTIME_DIR=/home/gogs_docker/.docker/run

I had written in .bashrc was the culprit...

Commenting it out and trying again.

With this:

export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock
docker-compose ps
         Name                        Command                  State                            Ports                      
--------------------------------------------------------------------------------------------------------------------------
gogs                      /app/gogs/docker/start.sh  ...   Up (healthy)   22/tcp, 0.0.0.0:3000->3000/tcp,:::3000->3000/tcp
gogs_rootless_mariadb_1   docker-entrypoint.sh mariadbd    Up             0.0.0.0:13306->3306/tcp,:::13306->3306/tcp      

This time it really looks fine.
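For the record, the working setup boils down to pointing every Docker client at the per-user socket. A minimal sketch (the UID 1003 is taken from the socket path shown above; this is my reading of it, not an official recipe):

```shell
# Rootless Docker listens on /run/user/<uid>/docker.sock rather than the
# system-wide /var/run/docker.sock. Hardcoding this host's UID for clarity:
uid=1003
DOCKER_HOST="unix:///run/user/${uid}/docker.sock"
echo "$DOCKER_HOST"   # -> unix:///run/user/1003/docker.sock
```

The `export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock` line above achieves the same thing, provided XDG_RUNTIME_DIR keeps the systemd-assigned value /run/user/1003 and isn't overridden in .bashrc.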

Now to redo the data migration.

Data migration

  • Database
  • gogs-repositories
  • config
Database
mysqldump -u git -p gogs_git > gogs.sql.bak
mysql -u gogs -p gogs --port=13306 < /home/adeno/gogs.sql.bak 
gogs-repositories

Copied the data to data/gogs/data/gogs-repositories.
Made 100999 the owner of the data (equivalent to the git user (UID 1000) inside the container).
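That 100999 comes from rootless Docker's user-namespace remapping. A sketch of the arithmetic, assuming the Ubuntu default subordinate-UID base of 100000 from /etc/subuid and a container-side git user of UID 1000:

```shell
# In rootless Docker, container UID 0 maps to the user itself, and
# container UID N (N >= 1) maps to subuid_base + N - 1 on the host.
subuid_base=100000    # assumed first subordinate UID for gogs_docker
container_uid=1000    # assumed UID of the git user inside the container
echo $((subuid_base + container_uid - 1))   # -> 100999
```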

config
[repository]
ROOT = /app/gogs/data/gogs-repositories

Set the database host name to gogs_mariadb_1.

That's all OK now.

Updates don't work

For some reason, it still tries to reference the original /home/git/gogs-repositories.
At a loss, I resorted to the last-ditch fix: a symlink.

gogs_docker@blackcore:/mnt/backuparea/gogs$ docker exec -it gogs /bin/bash

c6a2687906f9:/home/git/gogs-repositories# mkdir -p /home/git/gogs/gogs
c6a2687906f9:/home/git/gogs-repositories# cd /home/git/gogs/gogs/
c6a2687906f9:/home/git/gogs/gogs# ln -s /app/gogs/data/gogs-repositories/gogs-repositories gogs-repositories
c6a2687906f9:/home/git/gogs/gogs# ls -l
total 0
lrwxrwxrwx    1 root     root            50 Feb 15 16:18 gogs-repositories -> /app/gogs/data/gogs-repositories/gogs-repositories

Sunday, January 22, 2023

Home server environment migration (2) (GPE-2500T/RTL8125B keeps disconnecting)

Switching to wired LAN

Transferring the data was taking far longer than expected, so I decided to switch to wired LAN.
While I was at it, I wanted to go 2.5G.

Money being tight, I picked Planex's GPE-2500T and FX2G-05EM.
They use the RTL8125B chip, which seems to have a decent track record on Linux.


 

The wired LAN drops

The main PC runs stably, but the home server loses connectivity after a while.
The system log at that point looks like this:

[  940.182815] ------------[ cut here ]------------
[  940.182826] NETDEV WATCHDOG: enp1s0 (r8169): transmit queue 0 timed out
[  940.182875] WARNING: CPU: 6 PID: 0 at net/sched/sch_generic.c:477 dev_watchdog+0x277/0x280
[  940.182890] Modules linked in: ccm rfcomm cmac algif_hash algif_skcipher af_alg ip6t_REJECT nf_reject_ipv6 xt_hl ip6_tables ip6t_rt ipt_REJECT nf_reject_ipv4 xt_LOG nf_log_syslog xt_multiport nft_limit bnep xt_limit xt_addrtype xt_tcpudp xt_conntrack nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_compat nft_counter nf_tables nfnetlink zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) intel_rapl_msr intel_rapl_common snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio snd_hda_codec_hdmi snd_hda_intel snd_intel_dspcfg edac_mce_amd snd_intel_sdw_acpi kvm_amd snd_hda_codec snd_hda_core snd_hwdep kvm snd_pcm iwlmvm snd_seq_midi snd_seq_midi_event btusb nls_iso8859_1 mac80211 rapl input_leds joydev snd_rawmidi btrtl libarc4 btbcm snd_seq btintel bluetooth iwlwifi snd_seq_device wmi_bmof k10temp snd_timer ecdh_generic cfg80211 snd ecc ccp soundcore mac_hid sch_fq_codel nct6775 hwmon_vid msr parport_pc ppdev lp parport ramoops pstore_blk reed_solomon
[  940.183202]  pstore_zone efi_pstore ip_tables x_tables autofs4 btrfs blake2b_generic zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid0 multipath linear dm_mirror dm_region_hash dm_log raid1 amdgpu hid_generic iommu_v2 gpu_sched i2c_algo_bit drm_ttm_helper ttm drm_kms_helper syscopyarea sysfillrect sysimgblt usbhid fb_sys_fops hid crct10dif_pclmul crc32_pclmul ghash_clmulni_intel cec aesni_intel r8169 gpio_amdpt xhci_pci crypto_simd ahci rc_core i2c_piix4 nvme cryptd drm nvme_core libahci xhci_pci_renesas realtek wmi video gpio_generic
[  940.183390] CPU: 6 PID: 0 Comm: swapper/6 Tainted: P           O      5.15.0-58-generic #64-Ubuntu
[  940.183395] Hardware name: To Be Filled By O.E.M. A520M-ITX/ac/A520M-ITX/ac, BIOS P2.20 12/27/2022
[  940.183399] RIP: 0010:dev_watchdog+0x277/0x280
[  940.183405] Code: eb 97 48 8b 5d d0 c6 05 67 17 69 01 01 48 89 df e8 ce 64 f9 ff 44 89 e1 48 89 de 48 c7 c7 50 62 ed b8 48 89 c2 e8 ef d3 19 00 <0f> 0b eb 80 e9 de 3d 23 00 0f 1f 44 00 00 55 48 89 e5 41 57 41 56
[  940.183410] RSP: 0018:ffffa1a0c0314e70 EFLAGS: 00010282
[  940.183417] RAX: 0000000000000000 RBX: ffff8d1ddbd18000 RCX: 0000000000000000
[  940.183421] RDX: ffff8d24de3ac240 RSI: ffff8d24de3a0580 RDI: 0000000000000300
[  940.183425] RBP: ffffa1a0c0314ea8 R08: 0000000000000003 R09: fffffffffffd7cd0
[  940.183429] R10: 0000000000ffff0a R11: 0000000000000001 R12: 0000000000000000
[  940.183433] R13: ffff8d1ddb1f1e80 R14: 0000000000000001 R15: ffff8d1ddbd184c0
[  940.183436] FS:  0000000000000000(0000) GS:ffff8d24de380000(0000) knlGS:0000000000000000
[  940.183441] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  940.183446] CR2: 000055733bccb020 CR3: 0000000110410000 CR4: 0000000000750ee0
[  940.183450] PKRU: 55555554
[  940.183453] Call Trace:
[  940.183457]  <IRQ>
[  940.183462]  ? pfifo_fast_enqueue+0x160/0x160
[  940.183471]  call_timer_fn+0x2c/0x120
[  940.183479]  __run_timers.part.0+0x1e3/0x270
[  940.183485]  ? ktime_get+0x46/0xc0
[  940.183493]  ? native_x2apic_icr_read+0x20/0x20
[  940.183501]  ? lapic_next_event+0x20/0x30
[  940.183508]  ? clockevents_program_event+0xad/0x130
[  940.183517]  run_timer_softirq+0x2a/0x60
[  940.183522]  __do_softirq+0xd9/0x2e7
[  940.183530]  irq_exit_rcu+0x94/0xc0
[  940.183539]  sysvec_apic_timer_interrupt+0x80/0x90
[  940.183547]  </IRQ>
[  940.183549]  <TASK>
[  940.183552]  asm_sysvec_apic_timer_interrupt+0x1b/0x20
[  940.183558] RIP: 0010:native_safe_halt+0xb/0x10
[  940.183566] Code: 2c ff 5b 41 5c 41 5d 5d c3 cc cc cc cc 4c 89 ee 48 c7 c7 80 43 65 b9 e8 23 91 8d ff eb ca cc eb 07 0f 00 2d d9 e0 45 00 fb f4 <c3> cc cc cc cc eb 07 0f 00 2d c9 e0 45 00 f4 c3 cc cc cc cc cc 0f
[  940.183570] RSP: 0018:ffffa1a0c010be78 EFLAGS: 00000202
[  940.183577] RAX: ffffffffb85afc40 RBX: ffff8d1dc0373280 RCX: 7fffff251d8fda07
[  940.183582] RDX: 00000000000235a1 RSI: 0000000000000006 RDI: 00000000000235a2
[  940.183586] RBP: ffffa1a0c010be80 R08: 000000cd42eda501 R09: 0000000000000000
[  940.183590] R10: 0000000000000001 R11: 0000000000000000 R12: 0000000000000000
[  940.183593] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[  940.183598]  ? __cpuidle_text_start+0x8/0x8
[  940.183606]  ? default_idle+0xe/0x20
[  940.183611]  arch_cpu_idle+0x15/0x20
[  940.183620]  default_idle_call+0x3e/0xd0
[  940.183625]  cpuidle_idle_call+0x179/0x1e0
[  940.183633]  do_idle+0x83/0xf0
[  940.183640]  cpu_startup_entry+0x20/0x30
[  940.183644]  start_secondary+0x12a/0x180
[  940.183649]  secondary_startup_64_no_verify+0xc2/0xcb
[  940.183658]  </TASK>
[  940.183661] ---[ end trace 32949fbdb853d046 ]---
[ 1106.094315] r8169 0000:01:00.0 enp1s0: rtl_chipcmd_cond == 1 (loop: 100, delay: 100).
[ 1106.095552] r8169 0000:01:00.0 enp1s0: rtl_ephyar_cond == 1 (loop: 100, delay: 10).
[ 1106.096672] r8169 0000:01:00.0 enp1s0: rtl_ephyar_cond == 1 (loop: 100, delay: 10).
[ 1106.097791] r8169 0000:01:00.0 enp1s0: rtl_ephyar_cond == 1 (loop: 100, delay: 10).
[ 1106.098915] r8169 0000:01:00.0 enp1s0: rtl_ephyar_cond == 1 (loop: 100, delay: 10).
[ 1106.100034] r8169 0000:01:00.0 enp1s0: rtl_ephyar_cond == 1 (loop: 100, delay: 10).
[ 1106.101153] r8169 0000:01:00.0 enp1s0: rtl_ephyar_cond == 1 (loop: 100, delay: 10).
[ 1106.121214] r8169 0000:01:00.0 enp1s0: rtl_mac_ocp_e00e_cond == 1 (loop: 10, delay: 1000).

After that, [rtl_chipcmd_cond], [rtl_ephyar_cond], and [rtl_mac_ocp_e00e_cond] just repeat.

  • The ACT LED on the interface's LAN connector keeps blinking
  • A reboot does not recover it
  • A full shutdown does recover it
  • The BIOS is up to date
  • It happens whether or not an X session is logged in
  • It happens even while pinging continuously
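To catch the failure without watching the console, grepping the kernel log for the watchdog signature is enough. A self-contained sketch (the sample line is copied from the log above; a real run would pipe `dmesg` instead of `printf`):

```shell
# Count watchdog hits; feed a canned sample line so the sketch runs anywhere.
# Real run: dmesg | grep -c 'transmit queue 0 timed out'
printf 'NETDEV WATCHDOG: enp1s0 (r8169): transmit queue 0 timed out\n' |
  grep -c 'transmit queue 0 timed out'   # -> 1
```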

Comparison with the main PC

Item         Main PC                       Server
uname -a     5.15.0-57-generic #63-Ubuntu  5.15.0-58-generic #64-Ubuntu
lsb_release  Linux Mint 21 (vanessa)       Linux Mint 21.1 (vera)
Driver       r8169 *                       r8169 *

* Both apparently use the same driver as the motherboard's onboard LAN.

[    0.896998] r8169 0000:01:00.0 eth0: RTL8125B, **:**:**:**:**:**, XID 641, IRQ 39
[    0.897002] r8169 0000:01:00.0 eth0: jumbo features [frames: 9194 bytes, tx checksumming: ko]

[    0.912515] r8169 0000:04:00.0 eth1: RTL8168h/8111h, **:**:**:**:**:**, XID 541, IRQ 48
[    0.912518] r8169 0000:04:00.0 eth1: jumbo features [frames: 9194 bytes, tx checksumming: ko]

About the only difference is the OS version...

Trying the latest driver

https://www.realtek.com/ja/component/zoo/category/network-interface-controllers-10-100-1000m-gigabit-ethernet-pci-express-software

The driver changed from r8169 to r8125.

[    0.882838] r8125: loading out-of-tree module taints kernel.
[    0.882928] r8125: module verification failed: signature and/or required key missing - tainting kernel
[    0.883378] r8125 2.5Gigabit Ethernet driver 9.011.00-NAPI loaded
[    0.902733] r8125: This product is covered by one or more of the following patents: US6,570,884, US6,115,776, and US6,327,625.
[    0.904748] r8125  Copyright (C) 2022 Realtek NIC software team <nicfae@realtek.com> 
[    2.653915] r8125 0000:01:00.0 enp1s0: renamed from eth0
[   10.077752] r8125: enp1s0: link up

Watching it for a while.
Then, during use,

enp1s0: cmd = 0xff, should be 0x07

appeared and communication dropped.
It apparently happened right when apt upgrade did a large transfer.

Linux Mint 21 (vanessa)

Booted Mint 21, the same version as the main PC, from a live USB to test.
It doesn't drop during speed tests or apt upgrade.
An hour of watching, still no drop.

Linux Mint 21.1 (vera)

Booted 21.1 again in this state.
It doesn't drop during speed tests or apt upgrade.
An hour of watching, still no drop.

iperf (live USB)

Server <- Main PC
adeno@drakorange:~$ iperf -c 192.168.1.34 -t 30 
------------------------------------------------------------
Client connecting to 192.168.1.34, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.29 port 53526 connected with 192.168.1.34 port 5001
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-30.0242 sec  8.22 GBytes  2.35 Gbits/sec
Server -> Main PC
iperf -c 192.168.1.34 -R -t 30 
------------------------------------------------------------
Client connecting to 192.168.1.34, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.29 port 34464 connected with 192.168.1.34 port 5001 (reverse)
[ ID] Interval       Transfer     Bandwidth
[ *1] 0.0000-45.7757 sec   108 MBytes  19.8 Mbits/sec

The server PC rebooted. Orz

iperf (normal boot)

Server <- Main PC
adeno@drakorange:~$ iperf -c 192.168.1.34 -t 30 
------------------------------------------------------------
Client connecting to 192.168.1.34, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.29 port 32992 connected with 192.168.1.34 port 5001
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-30.0178 sec  8.22 GBytes  2.35 Gbits/sec

Server -> Main PC

adeno@drakorange:~$ iperf -c 192.168.1.34 -R -t 30 
------------------------------------------------------------
Client connecting to 192.168.1.34, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  1] local 192.168.1.29 port 51360 connected with 192.168.1.34 port 5001 (reverse)
[ ID] Interval       Transfer     Bandwidth
[ *1] 0.0000-30.0122 sec  8.22 GBytes  2.35 Gbits/sec

But after repeating the above about ten times and varying the parameters,

iperf -c 192.168.1.34 -R -t 60 -b 200M
enp1s0: cmd = 0xff, should be 0x07

occurred.
The main PC was fine.

Linux Mint 21 (vanessa)

Trying again with 21.

iperf -c 192.168.1.34 -t 30
iperf -c 192.168.1.34 -R -t 30

Ran 10 sets of the above, plus

iperf -c 192.168.1.34 -R -t 60 -b 200M

Looks fine.
No, it doesn't.

NETDEV WATCHDOG: enp1s0 (r8169): transmit queue 0 timed out

But communication still works afterward.
No, it really is broken.

Could it be a bad individual unit?

Swapping the units

I swapped the GPE-2500T cards between the main PC and the server.
The disconnected state then showed up right away.
So it really might be a bad unit?

Summary

Summarizing the testing so far:

Hardware  GPE-2500T  OS                       Driver  Long ping  Speed test  apt upgrade  iperf  Overall
Main PC   A          Linux Mint 21 (vanessa)  r8169
Server    B          Linux Mint 21.1 (vera)   r8169   ×  ×
Server    B          Linux Mint 21.1 (vera)   r8125   ×  ×
Server    B          Linux Mint 21.1 (vera)   r8169   ×  ×
Server    B          Linux Mint 21 (vanessa)  r8169   ×
Main PC   B          Linux Mint 21 (vanessa)  r8169   ?  ?  ×  ×  ×
Server    A          Linux Mint 21 (vanessa)  r8169

Replacement

With no other choice, I got hold of another GPE-2500T.
Hopefully that puts this to rest.
I also wonder whether support would take a return or an exchange...

Samba transfer speed

Now that I'm finally on wired LAN, the transfer speed went from
14 MB/s (112 Mbps) to 190 MB/s (1,520 Mbps).
With HDDs in software RAID it never got near wire speed, but it's a satisfying result.
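The Mbps figures in parentheses are just the MB/s values times 8:

```shell
# 1 MB/s = 8 Mbit/s
echo "$((14 * 8)) Mbps -> $((190 * 8)) Mbps"   # -> 112 Mbps -> 1520 Mbps
```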