Changes for page Proxmox Backup server

Last modified by Kevin Wiki on 2024/05/21 21:23

From version 5.1
edited by Kevin Wiki
on 2024/04/06 10:11
Change comment: There is no comment for this version
To version 19.1
edited by Kevin Wiki
on 2024/04/06 12:51
Change comment: There is no comment for this version

Summary

Details

Page properties
Content
... ... @@ -2,118 +2,200 @@
2 2  (((
3 3  (% class="col-xs-12 col-sm-8" %)
4 4  (((
5 -= Configuration =
5 += Backup Server configuration =
6 6  
7 -Configure Backup on the Datacenter or PVE host level in the proxmox web GUI. If a backup storage is already added input the following preferences:
8 -\\{{code language="none" width="100%"}}Selection mode: include selected VMs
9 -Send email to: kevin.midboe+{PVE_HOSTNAME}@gmail.com
10 -Email: On failure only
11 -Mode: Snapshot
12 -Enabled: True
13 -Job Comment: {{guestname}}, {{node}}, {{vmid}}{{/code}}
7 +The backup server is set up with:
14 14  
15 -= {{code language="yaml"}}apiVersion: apps/v1
16 -kind: DaemonSet
17 -metadata:
18 - name: prometheus-node-exporter-daemonset
19 - namespace: monitoring
20 - labels:
21 - app: prometheus-node-exporter
22 -spec:
23 - selector:
24 - matchLabels:
25 - app: prometheus-node-exporter
26 - template:
27 - metadata:
28 - labels:
29 - app: prometheus-node-exporter
30 - spec:
31 - containers:
32 - - args:
33 - - --path.procfs=/host/proc
34 - - --path.sysfs=/host/sys
35 - - --path.rootfs=/host/root
36 - - --web.listen-address=:9100
37 - image: quay.io/prometheus/node-exporter:latest
38 - imagePullPolicy: IfNotPresent
39 - name: prometheus-node-exporter
40 - ports:
41 - - name: metrics
42 - containerPort: 9100
43 - hostPort: 9100
44 - securityContext:
45 - allowPrivilegeEscalation: false
46 - volumeMounts:
47 - - mountPath: /host/proc
48 - name: proc
49 - readOnly: true
50 - - mountPath: /host/sys
51 - name: sys
52 - readOnly: true
53 - - mountPath: /host/root
54 - mountPropagation: HostToContainer
55 - name: root
56 - readOnly: true
57 - hostNetwork: true
58 - hostPID: true
59 - restartPolicy: Always
60 - tolerations:
61 - - key: "node-role.kubernetes.io/master"
62 - effect: "NoSchedule"
63 - volumes:
64 - - hostPath:
65 - path: /proc
66 - type: ""
67 - name: proc
68 - - hostPath:
69 - path: /sys
70 - type: ""
71 - name: sys
72 - - hostPath:
73 - path: /
74 - type: ""
75 - name: root
76 - updateStrategy:
77 - rollingUpdate:
78 - maxSurge: 0
79 - maxUnavailable: 1
80 - type: RollingUpdate
81 -{{/code}} =
9 +* ZFS storage
10 +* access control (API tokens)
11 +* datastore
12 +** sync jobs
13 +** prune jobs
14 +** verify jobs
15 +** permissions
16 +* timings and simulator
82 82  
83 -= Methodology =
18 +== ZFS storage array ==
84 84  
85 -Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
20 +There are currently 2 x 8TB WD drives. Current pool status:
86 86  
87 -== Sub-paragraph ==
22 +(((
23 +{{code language="none"}}
24 +kevin@clio:~$ sudo zpool status pergamum
25 +pool: pergamum
26 +state: ONLINE
27 +  scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
28 +config:
29 +        NAME                                            STATE     READ WRITE CKSUM
30 +        pergamum                                        ONLINE       0     0     0
31 +          raidz1-0                                      ONLINE       0     0     0
32 +            scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1  ONLINE       0     0     0
33 +            sdc1                                        ONLINE       0     0     0
34 +errors: No known data errors
35 +{{/code}}
36 +)))
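+
+To check capacity and usage of the pool, the standard ZFS commands apply:
+
+(((
+{{code language="none"}}
+# pool-level size, allocation and health
+zpool list pergamum
+# dataset-level used and available space
+zfs list pergamum
+{{/code}}
+)))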
88 88  
89 -Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
90 90  
91 -= Proxmox backup server =
39 +=== Creating and expanding the ZFS pool ===
92 92  
93 -= =
41 +(((
42 +{{code language="none"}}
+# create the pool with a single raidz vdev from the two disks (-f forces creation)
43 +zpool create pergamum raidz /dev/disk/by-partuuid/9fab17e5-df2d-2448-b5d4-10193c673a6b /dev/disk/by-partuuid/f801ed37-1d6c-ee40-8b85-6bfc49aba0fb -f
+# mount the pool at /mnt/pergamum
44 +zfs set mountpoint=/mnt/pergamum pergamum
+# if the pool is missing after a reboot, re-import all pools from the cache file
45 +(zpool import -c /etc/zfs/zpool.cache -aN)
46 +zpool export pergamum
47 +{{/code}}
48 +)))
94 94  
95 -Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
96 96  
97 -== Sub-paragraph ==
51 +(((
52 +Have not tried yet, but another set of disks can be added as an additional top-level virtual device (vdev) to the existing RAID-Z pool:
98 98  
99 -Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
54 +{{code language="none"}}
55 +zpool add -n pergamum raidz DISK1 DISK2
56 +{{/code}}
100 100  
101 -== Sub-paragraph ==
102 102  
103 -Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
59 +>NOTE! `-n` is a dry run; remove it to commit.
104 104  )))
105 105  
106 106  
63 +== Access Control ==
64 +
65 +Each client host that wants to back up its contents to the backup server should have its own API token for authentication.
66 +
67 +API Token:
68 +
69 +* user: root@pam
70 +* token name: CLIENT_NAME
71 +* expire: never
72 +* enabled: true
73 +
74 +Permissions - add an API Token Permission:
75 +
76 +* path: /datastore/proxmox-backup/CLIENT_NAME
77 +* api token: root@pam!CLIENT_NAME
78 +* role: DatastoreBackup
79 +* propagate: true
80 +
81 +>Note! The path will not be defined until after the datastore namespace is defined in the steps below.
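+
+For reference, the token and permission can also be created from the server shell; a minimal sketch using the standard `proxmox-backup-manager` CLI, with `CLIENT_NAME` as a placeholder (run the acl step after the namespace exists):
+
+(((
+{{code language="none"}}
+# generate an API token for root@pam, named after the client
+proxmox-backup-manager user generate-token root@pam CLIENT_NAME
+# grant that token backup rights on the client's namespace
+proxmox-backup-manager acl update /datastore/proxmox-backup/CLIENT_NAME DatastoreBackup --auth-id 'root@pam!CLIENT_NAME'
+{{/code}}
+)))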
82 +
83 +== Proxmox datastore ==
84 +
85 +If none exists, create the datastore. Ours is named `proxmox-backup` and points to the ZFS storage mounted at `/mnt/pergamum`. All later references to `proxmox-backup` refer to whatever you named it in this create step (a CLI sketch follows).
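+
+A minimal sketch of the same create step from the shell (names are ours; verify against your setup):
+
+(((
+{{code language="none"}}
+# create a datastore named proxmox-backup backed by the ZFS mountpoint
+proxmox-backup-manager datastore create proxmox-backup /mnt/pergamum
+{{/code}}
+)))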
86 +
87 +=== Namespace ===
88 +
89 +Namespaces are what we use within a datastore to separate permissions per host. It's important to create these for the API tokens created in the Access Control section above.
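+
+Namespaces can be added from the datastore's Content view in the GUI. If a CLI route is preferred, newer `proxmox-backup-client` versions appear to ship a namespace subcommand; a hedged sketch (verify against your PBS version, the repository string is assumed):
+
+(((
+{{code language="none"}}
+# create a per-client namespace in the proxmox-backup datastore
+proxmox-backup-client namespace create CLIENT_NAME --repository root@pam@localhost:proxmox-backup
+{{/code}}
+)))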
90 +
91 +=== Prune & Garbage collect ===
92 +
93 +We don't require backups for every day of the year. Pruning systematically deletes older backups, retaining a given number of backups per time interval. There is a fantastic simulator for experimenting with different backup schedules and prune options: [[https:~~/~~/pbs.proxmox.com/docs/prune-simulator/>>https://pbs.proxmox.com/docs/prune-simulator/]]. The current configuration is listed below, with a dry-run sketch after the list:
94 +
95 +* datastore: proxmox-backup
96 +* namespace: root
97 +* keep last: 4
98 +* keep hourly: -
99 +* keep daily: 6
100 +* keep weekly: 3
101 +* keep monthly: 6
102 +* keep yearly: 4
103 +* max_depth: full
104 +* prune schedule: 0/6:00
105 +* enabled: true
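+
+To sanity-check what this retention would remove, the same keep flags can be passed to a manual dry-run prune from a client; a sketch, assuming a backup group vm/100 exists in the datastore:
+
+(((
+{{code language="none"}}
+# show which snapshots would be kept or removed, without deleting anything
+proxmox-backup-client prune vm/100 --dry-run --keep-last 4 --keep-daily 6 --keep-weekly 3 --keep-monthly 6 --keep-yearly 4 --repository root@pam@localhost:proxmox-backup
+{{/code}}
+)))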
106 +
107 +=== Verify jobs ===
108 +
109 +Current configuration is:
110 +
111 +* local datastore: proxmox-backup
112 +* namespace: root
113 +* max-depth: full
114 +* schedule: daily
115 +* skip verified: true
116 +* re-verify after: 30 days
117 +
118 +=== Permissions ===
119 +
120 +Permissions are explained in the Access Control section above, but it can be easier to configure them from the datastore. Navigate to the datastore's Permissions tab and add an API Token Permission:
121 +
122 +* path: /datastore/proxmox-backup/CLIENT_NAME
123 +* API Token: root@pam!CLIENT_NAME
124 +* Role: DatastoreBackup
125 +* Propagate: true
126 +
127 += Tailscale =
128 +
129 +Tailscale is used to create a network that uses WireGuard to transparently connect local and remote machines. To avoid depending on a third party, a local instance of Headscale is used as the Tailscale login server.
130 +
131 +Setting up a connection should only require `sudo tailscale up ~-~-login-server https:~/~/TAILSCALE_SUBDOMAIN.schleppe.cloud`.
132 +To view the status: `sudo tailscale status`.
133 +
134 += Jottacloud client =
135 +
136 +The cloud backup provider used is Jottacloud. They provide a CLI to easily add directories to sync to their cloud backup storage.
137 +>NOTE! This setup still uses user `kevin` and not the intended `jottad` user.
138 +
139 +(((
140 +{{code language="none"}}
141 +# install jotta-cli
142 +sudo curl -fsSL https://repo.jotta.cloud/public.asc -o /usr/share/keyrings/jotta.gpg
143 +echo "deb [signed-by=/usr/share/keyrings/jotta.gpg] https://repo.jotta.cloud/debian debian main" | sudo tee /etc/apt/sources.list.d/jotta-cli.list
144 +sudo apt-get update
145 +sudo apt-get install jotta-cli
146 +
147 +# configure runtime environment
148 +sudo useradd -m jottad
149 +sudo usermod -a -G jottad backup
150 +{{/code}}
151 +)))
152 +
153 +Create the systemd unit file `/usr/lib/systemd/user/jottad.service` and enable it with the commands shown after the unit file:
154 +
155 +(((
158 +{{code language="ini" layout="LINENUMBERS" title="/usr/lib/systemd/user/jottad.service"}}
159 +[Unit]
160 +Description=Jotta client daemon
161 +
162 +[Service]
163 +Type=notify
164 +# Group=backup
165 +# UMask=0002
166 +
167 +# EnvironmentFile=-%h/.config/jotta-cli/jotta-cli.env
168 +ExecStart=/usr/bin/jottad stdoutlog datadir %h/.jottad/
169 +Restart=on-failure
170 +
171 +[Install]
172 +WantedBy=default.target
173 +{{/code}}
174 +)))
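+
+Enable it as a user service; a sketch, assuming the unit file above and that the daemon should keep running without an active login session (the jotta-cli subcommands follow Jottacloud's CLI docs; /mnt/pergamum as the backed-up directory is an assumption):
+
+(((
+{{code language="none"}}
+# pick up the new unit file and start it for the current user
+systemctl --user daemon-reload
+systemctl --user enable --now jottad.service
+# keep user services running after logout (run as root)
+sudo loginctl enable-linger kevin
+
+# authenticate and add the datastore directory to the sync set
+jotta-cli login
+jotta-cli add /mnt/pergamum  # assumed target directory
+{{/code}}
+)))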
175 +
176 +== Flaws ==
177 +
178 +Since Proxmox Backup Server deduplicates data into chunks, the complete chunk set is required for restores. This makes it impossible to download a single file representing a VM or LXC; all chunk files must be downloaded and imported into Proxmox Backup Server for reconstruction.
179 +
180 +It also seems like a LOT of files shift between runs - being added and deleted - which makes the diff uploaded to Jottacloud huge.
181 +
182 += Client Configuration =
183 +
184 +Configure backups at the Datacenter or PVE host level in the Proxmox web GUI. If a backup storage is already added, input the following preferences (a rough vzdump equivalent follows the list):
185 +
186 +* selection mode: include selected VMs
187 +* send email to: kevin.midboe+{PVE_HOSTNAME}@gmail.com
188 +* email: on failure only
189 +* mode: snapshot
190 +* enabled: true
191 +* job comment: ~{~{guestname}}, ~{~{node}}, ~{~{vmid}}
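+
+Roughly the same job can be exercised one-off from a PVE host with vzdump; a sketch, assuming a guest with VMID 100 and the PBS storage added to PVE as proxmox-backup (the template variables map to vzdump's notes-template):
+
+(((
+{{code language="none"}}
+# snapshot-mode backup of VMID 100 to the PBS storage, mailing only on failure
+vzdump 100 --storage proxmox-backup --mode snapshot --mailnotification failure --mailto kevin.midboe+{PVE_HOSTNAME}@gmail.com --notes-template "{{guestname}}, {{node}}, {{vmid}}"
+{{/code}}
+)))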
192 +)))
193 +
194 +
107 107  (% class="col-xs-12 col-sm-4" %)
108 108  (((
109 109  {{box title="**Contents**"}}
110 110  {{toc/}}
111 111  {{/box}}
112 -
113 -[[image:[email protected]]]
114 -//Figure 1: [[Sea>>https://commons.wikimedia.org/wiki/File:Isle_of_Icacos_II.jpg]]//
115 -
116 -[[image:[email protected]]]
117 -//Figure 2: [[Waves>>https://commons.wikimedia.org/wiki/File:Culebra_-_Playa_de_Flamenco.jpg]]//
118 118  )))
119 119  )))