Wiki source code of Proxmox Backup server

Version 21.1 by Kevin Wiki on 2024/04/06 14:14

1 (% class="row" %)
2 (((
3 (% class="col-xs-12 col-sm-8" %)
4 (((
5 = Backup Server configuration =
6
7 The backup server is set up with:
8
9 * zfs storage
10 * access control - api tokens
11 * datastore
12 ** sync jobs
13 ** prune jobs
14 ** verify jobs
15 ** permissions
16 * timings and simulator
17
18 == ZFS storage array ==
19
20 There are currently 2 x 8TB WD drives. Current pool status:
21
22 (((
23 {{code language="none"}}
24 kevin@clio:~$ sudo zpool status pergamum
25 pool: pergamum
26 state: ONLINE
27   scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
28 config:
29         NAME                                            STATE     READ WRITE CKSUM
30         pergamum                                        ONLINE       0     0     0
31           raidz1-0                                      ONLINE       0     0     0
32             scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1  ONLINE       0     0     0
33             sdc1                                        ONLINE       0     0     0
34 errors: No known data errors
35 {{/code}}
36 )))
37
38
39 === Creating and expanding zfs pool ===
40
41 (((
42 {{code language="none"}}
43 zpool create pergamum raidz /dev/disk/by-partuuid/9fab17e5-df2d-2448-b5d4-10193c673a6b /dev/disk/by-partuuid/f801ed37-1d6c-ee40-8b85-6bfc49aba0fb -f
44 zfs set mountpoint=/mnt/pergamum pergamum
45 (zpool import -c /etc/zfs/zpool.cache -aN)
46 zpool export pergamum
47 {{/code}}
48 )))
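
To confirm the pool came up healthy and with the intended mountpoint, the standard ZFS query commands can be used:

```shell
# list the pool to confirm capacity and health
zpool list pergamum

# the mountpoint property should report /mnt/pergamum as set above
zfs get mountpoint pergamum
```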
49
50
51 (((
52 Not yet tried: adding another set of disks as an additional top-level virtual device to the existing RAID-Z pool:
53
54 {{code language="none"}}
55 zpool add -n pergamum raidz DISK1 DISK2
56 {{/code}}
57
58
59 > NOTE! `-n` is a dry run; remove it to commit.
60 )))
61
62
63 == Access Control ==
64
65 Each client host that wants to back up its contents to the backup server should have its own unique API token for authentication.
66
67 API Token:
68
69 * user: root@pam
70 * token name: CLIENT_NAME
71 * expire: never
72 * enabled: true
73
74 Permissions - add an API Token Permission:
75
76 * path: /datastore/proxmox-backup/CLIENT_NAME
77 * api token: root@pam!CLIENT_NAME
78 * role: DatastoreBackup
79 * propagate: true
80
81 >Note! The path will not be defined until after the datastore namespace is defined in the steps below.
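
The same token and permission can also be created from a shell on the backup server; a sketch using the `proxmox-backup-manager` CLI (`CLIENT_NAME` is the placeholder from the lists above):

```shell
# generate an API token for root@pam named after the client
proxmox-backup-manager user generate-token root@pam CLIENT_NAME

# grant that token the DatastoreBackup role on the client's namespace path
proxmox-backup-manager acl update \
    /datastore/proxmox-backup/CLIENT_NAME DatastoreBackup \
    --auth-id 'root@pam!CLIENT_NAME'
```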
82
83 == Proxmox datastore ==
84
85 If none exists, create the datastore. Ours is named `proxmox-backup` and points to the ZFS storage mounted at `/mnt/pergamum`. Every reference to `proxmox-backup` below refers to whatever name you chose in this create step.
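
For reference, the same datastore can also be created from the CLI; a sketch assuming the name and mountpoint above:

```shell
# create a datastore named proxmox-backup backed by the ZFS mountpoint
proxmox-backup-manager datastore create proxmox-backup /mnt/pergamum
```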
86
87 === Namespace ===
88
89 Namespaces are what we use within a datastore to separate permissions per host. It is important to create one for each of the API tokens created in the Access Control section above.
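
A namespace can be added from the datastore's Content tab in the GUI, or from a client; a sketch assuming the `namespace` subcommand of `proxmox-backup-client` (the repository string, `BACKUP_SERVER`, and `CLIENT_NAME` are all placeholders):

```shell
# point the client at the datastore, then create a per-host namespace
export PBS_REPOSITORY='root@pam!CLIENT_NAME@BACKUP_SERVER:proxmox-backup'
proxmox-backup-client namespace create CLIENT_NAME
```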
90
91 === Prune & Garbage collect ===
92
93 We don't require backups for every day of the year. Pruning lets you systematically delete older backups, retaining a given number of backups per time interval. There is a fantastic simulator for experimenting with different backup schedules and prune options: [[https:~~/~~/pbs.proxmox.com/docs/prune-simulator/>>https://pbs.proxmox.com/docs/prune-simulator/]]. The current configuration is:
94
95 * datastore: proxmox-backup
96 * namespace: root
97 * keep last: 4
98 * keep hourly: -
99 * keep daily: 6
100 * keep weekly: 3
101 * keep monthly: 6
102 * keep yearly: 4
103 * max-depth: full
104 * prune schedule: 0/6:00 (every 6 hours)
105 * enabled: true
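
On the server these options end up as a prune job in `/etc/proxmox-backup/prune.cfg`; a rough sketch of the resulting section (the job id `daily-prune` is made up and the field names are from memory, so verify against your own file):

```ini
prune: daily-prune
	store proxmox-backup
	schedule 0/6:00
	keep-last 4
	keep-daily 6
	keep-weekly 3
	keep-monthly 6
	keep-yearly 4
```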
106
107 === Verify jobs ===
108
109 Current configuration is:
110
111 * local datastore: proxmox-backup
112 * namespace: root
113 * max-depth: full
114 * schedule: daily
115 * skip verified: true
116 * re-verify after: 30 days
117
118 === Permissions ===
119
120 Permissions are explained in the Access Control section above, but it can be easier to configure permissions from the datastore. Navigate to the datastore Permission tab and add API Token Permission:
121
122 * path: /datastore/proxmox-backup/CLIENT_NAME
123 * API Token: root@pam!CLIENT_NAME
124 * Role: DatastoreBackup
125 * Propagate: true
126
127 = Tailscale =
128
129 Tailscale is used to create a network that uses WireGuard to transparently connect local and remote machines. To avoid relying on a third party, a local instance of Headscale is used as the Tailscale login server.
130
131 {{code language="bash"}}
132 curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null
133 curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list
134
135 sudo apt-get update
136 sudo apt-get install tailscale
137
138 systemctl status tailscaled.service
139 sudo tailscale up --login-server https://SUBDOMAIN.schleppe.cloud
140 tailscale status
141 {{/code}}
142
143 Connect to headscale login server:
144
145 {{code language="none"}}
146 $ sudo tailscale up --login-server https://SUBDOMAIN.schleppe.cloud
147
148 To authenticate, visit:
149
150 https://SUBDOMAIN.schleppe.cloud/register/nodekey:fe30125f6dc09b2ac387a3b06c3ebc2678f031d07bd87bb76d91cd1890226c9f
151
152 Success.
153 {{/code}}
154
155 View more info in the docs: [[https:~~/~~/earvingad.github.io/posts/headscale/>>https://earvingad.github.io/posts/headscale/]]
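
On the Headscale side, the node key printed by the client has to be registered before the login completes; a sketch assuming the `headscale` CLI and a user named `schleppe` (both the user and `NODEKEY` are placeholders here):

```shell
# on the headscale server: approve the node key under a user
headscale nodes register --user schleppe --key nodekey:NODEKEY

# confirm the new machine is listed
headscale nodes list
```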
156
157 = Jottacloud client =
158
159 The cloud backup provider used is Jottacloud. They provide a CLI to easily add directories to sync to their cloud backup storage.
160 NOTE! This setup still uses the user `kevin` and not the correct `jottad` user.
161
162 (((
163 {{code language="none"}}
164 # install jotta-cli
165 sudo curl -fsSL https://repo.jotta.cloud/public.asc -o /usr/share/keyrings/jotta.gpg
166 echo "deb [signed-by=/usr/share/keyrings/jotta.gpg] https://repo.jotta.cloud/debian debian main" | sudo tee /etc/apt/sources.list.d/jotta-cli.list
167 sudo apt-get update
168 sudo apt-get install jotta-cli
169
170 # configure runtime environment
171 sudo useradd -m jottad
172 sudo usermod -a -G jottad backup
173 {{/code}}
174 )))
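
With the package installed, backup directories are added through the CLI; a sketch of typical `jotta-cli` usage (command names assumed from their CLI; `/mnt/pergamum` is the datastore mount from above):

```shell
# authenticate this machine against the Jottacloud account
jotta-cli login

# add the datastore directory to backup and check sync progress
jotta-cli add /mnt/pergamum
jotta-cli status
```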
175
176 Create the systemd user unit `/usr/lib/systemd/user/jottad.service` and enable it:
177
178 (((
179
180
181 {{code language="ini" layout="LINENUMBERS" title="/usr/lib/systemd/user/jottad.service"}}
182 [Unit]
183 Description=Jotta client daemon
184
185 [Service]
186 Type=notify
187 # Group=backup
188 # UMask=0002
189
190 # EnvironmentFile=-%h/.config/jotta-cli/jotta-cli.env
191 ExecStart=/usr/bin/jottad stdoutlog datadir %h/.jottad/
192 Restart=on-failure
193
194 [Install]
195 WantedBy=default.target
196 {{/code}}
197 )))
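
Because this is a user unit, it is enabled with `systemctl --user` as the account that runs the daemon (still `kevin` in this setup, per the note above):

```shell
# reload unit files, then enable and start the daemon for this user
systemctl --user daemon-reload
systemctl --user enable --now jottad.service
systemctl --user status jottad.service
```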
198
199 == Flaws ==
200
201 Since Proxmox Backup Server deduplicates data into chunks, the complete chunk store is needed for any restore. This makes it impossible to download a single file representing a VM or LXC; all the files must be downloaded and imported into Proxmox Backup Server for reconstruction.
202
203 It also seems like a lot of chunk files churn between runs - being added and deleted - making the diff uploaded to Jottacloud huge.
204
205 = Client Configuration =
206
207 Configure Backup at the Datacenter or PVE host level in the Proxmox web GUI. If a backup storage is already added, input the following preferences:
208
209 * selection mode: include selected VMs
210 * send email to: [[[email protected]>>mailto:[email protected]]]
211 * email: on failure only
212 * mode: snapshot
213 * enabled: true
214 * job comment: ~{~{guestname}}, ~{~{node}}, ~{~{vmid}}
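
On the PVE side the backup storage corresponds to a `pbs` entry in `/etc/pve/storage.cfg`; a rough sketch (server, namespace, and fingerprint are placeholders, and the token secret lives in a separate file under `/etc/pve/priv/`):

```ini
pbs: proxmox-backup
	datastore proxmox-backup
	namespace CLIENT_NAME
	server BACKUP_SERVER
	username root@pam!CLIENT_NAME
	fingerprint XX:XX:XX
	content backup
```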
215 )))
216
217
218 (% class="col-xs-12 col-sm-4" %)
219 (((
220 {{box title="**Contents**"}}
221 {{toc/}}
222 {{/box}}
223 )))
224 )))