Changes for page Proxmox Backup server

Last modified by Kevin Wiki on 2024/05/21 21:23

From version 25.1
edited by Kevin Wiki
on 2024/04/06 14:25
Change comment: There is no comment for this version
To version 3.1
edited by Kevin Wiki
on 2024/02/17 20:53
Change comment: There is no comment for this version

Summary

Details

Page properties
Content
... ... @@ -2,229 +2,40 @@
2 2  (((
3 3  (% class="col-xs-12 col-sm-8" %)
4 4  (((
5 -(% class="wikigeneratedid" %)
6 -Following provides setup steps, configuration explanation and application instructions for backup server. This box both generates backups and syncs them to remote locations. View general backup explanation page [[Server backup>>doc:infra.Backup.WebHome]] for high-level information.
5 += Methodology =
7 7  
8 -(% class="wikigeneratedid" %)
9 -Web GUI: [[https:~~/~~/clio.schleppe:8007/#pbsDashboard>>url:https://clio.schleppe:8007/#pbsDashboard]]
7 +Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
10 10  
11 -= Backup Server configuration =
9 +== Sub-paragraph ==
12 12  
13 -The backup server is set up with:
11 +Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
14 14  
15 -* zfs storage
16 -* access control - api tokens
17 -* datastore
18 -** sync jobs
19 -** prune jobs
20 -** verify jobs
21 -** permissions
22 -* timings and simulator
13 += Proxmox backup server =
23 23  
24 -== ZFS storage array ==
15 += =
25 25  
26 -There are currently 2 x 8TB WD drives. Current pool status:
17 +Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
27 27  
28 -(((
29 -{{code language="none"}}
30 -kevin@clio:~$ sudo zpool status pergamum
31 -pool: pergamum
32 -state: ONLINE
33 -  scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
34 -config:
35 -        NAME                                            STATE     READ WRITE CKSUM
36 -        pergamum                                        ONLINE       0     0     0
37 -          raidz1-0                                      ONLINE       0     0     0
38 -            scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1  ONLINE       0     0     0
39 -            sdc1                                        ONLINE       0     0     0
40 -errors: No known data errors
41 -{{/code}}
42 -)))
19 +== Sub-paragraph ==
43 43  
21 +Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
44 44  
45 -=== Creating and expanding zfs pool ===
23 +== Sub-paragraph ==
46 46  
47 -(((
48 -{{code language="none"}}
49 -zpool create pergamum raidz /dev/disk/by-partuuid/9fab17e5-df2d-2448-b5d4-10193c673a6b /dev/disk/by-partuuid/f801ed37-1d6c-ee40-8b85-6bfc49aba0fb -f
50 -zfs set mountpoint=/mnt/pergamum pergamum
51 -(zpool import -c /etc/zfs/zpool.cache -aN)
52 -zpool export pergamum
53 -{{/code}}
25 +Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
54 54  )))
55 55  
56 56  
57 -(((
58 -Have not tried this yet, but to add another set of disks as an additional top-level virtual device to the existing RAID-Z pool:
59 -
60 -{{code language="none"}}
61 -zpool add -n pergamum raidz DISK1 DISK2
62 -{{/code}}
63 -
64 -
65 -~> NOTE! `-n` is a dry run; remove it to commit.
66 -)))
67 -
68 -
69 -== Access Control ==
70 -
71 -Each client host that wants to back up its contents to the backup server should have its own unique API token for authentication.
72 -
73 -API Token:
74 -
75 -* user: [[root@pam>>mailto:root@pam]]
76 -* token name: CLIENT_NAME
77 -* expire: never
78 -* enabled: true
79 -
80 -Permissions - Add an API Token Permission:
81 -
82 -* path: /datastore/proxmox-backup/CLIENT_NAME
83 -* api token: root@pam!CLIENT_NAME
84 -* role: DatastoreBackup
85 -* propagate: true
86 -
87 ->Note! The path will not be defined until after the datastore namespace is defined in the steps below.
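
The token and its permission can also be created from the PBS shell. A minimal sketch, assuming `CLIENT_NAME` is replaced with the actual host name (flags may vary slightly between PBS versions):

{{code language="bash"}}
# generate an API token for root@pam; the secret is printed once, copy it to the client
proxmox-backup-manager user generate-token root@pam CLIENT_NAME

# grant the token backup rights on its namespace path
proxmox-backup-manager acl update /datastore/proxmox-backup/CLIENT_NAME DatastoreBackup --auth-id 'root@pam!CLIENT_NAME'
{{/code}}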
88 -
89 -== Proxmox datastore ==
90 -
91 -If none exists, create the datastore. Ours is named `proxmox-backup` and points to the ZFS storage mounted at `/mnt/pergamum`. All later references to `proxmox-backup` refer to whatever name was chosen in this create step.
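
A minimal CLI sketch for the create step, assuming the ZFS pool is mounted at `/mnt/pergamum` as set up above:

{{code language="bash"}}
# create the datastore on the ZFS mountpoint (this also initialises the chunk store)
proxmox-backup-manager datastore create proxmox-backup /mnt/pergamum
{{/code}}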
92 -
93 -=== Namespace ===
94 -
95 -Namespaces are what we use within a datastore to separate permissions per host. It is important to create one for each of the API tokens created in the Access Control section above.
96 -
97 -=== Prune & Garbage collect ===
98 -
99 -We don't require backups for every day of the year. Pruning systematically deletes older backups, retaining backups for a configured number of recent hours, days, weeks, months and years. There is a fantastic simulator that can be used to experiment with different backup schedules and prune options: [[https:~~/~~/pbs.proxmox.com/docs/prune-simulator/>>https://pbs.proxmox.com/docs/prune-simulator/]]. The current configuration is as follows (a client-side dry-run sketch follows the list):
100 -
101 -* datastore: proxmox-backup
102 -* namespace: root
103 -* keep last: 4
104 -* keep hourly: -
105 -* keep daily: 6
106 -* keep weekly: 3
107 -* keep monthly: 6
108 -* keep yearly: 4
109 -* max_depth: full
110 -* prune schedule: 0/6:00
111 -* enabled: true
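
The retention settings can be sanity-checked from a client with a manual dry-run prune. A sketch only; `vm/100` and the repository string are placeholders, and `--ns` assumes the per-host namespaces described above (the token secret is read from `PBS_PASSWORD` or prompted):

{{code language="bash"}}
# dry run: show which snapshots the retention above would keep or remove for one backup group
proxmox-backup-client prune vm/100 --dry-run \
  --keep-last 4 --keep-daily 6 --keep-weekly 3 --keep-monthly 6 --keep-yearly 4 \
  --ns CLIENT_NAME \
  --repository 'root@pam!CLIENT_NAME@clio.schleppe:proxmox-backup'
{{/code}}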
112 -
113 -=== Verify jobs ===
114 -
115 -The current configuration is:
116 -
117 -* local datastore: proxmox-backup
118 -* namespace: root
119 -* max-depth: full
120 -* schedule: daily
121 -* skip verified: true
122 -* re-verify after: 30 days
123 -
124 -=== Permissions ===
125 -
126 -Permissions are explained in the Access Control section above, but it can be easier to configure them from the datastore. Navigate to the datastore's Permissions tab and add an API Token Permission:
127 -
128 -* path: /datastore/proxmox-backup/CLIENT_NAME
129 -* API Token: root@pam!CLIENT_NAME
130 -* Role: DatastoreBackup
131 -* Propagate: true
132 -
133 -= Tailscale =
134 -
135 -Tailscale is used to create a WireGuard-based network that transparently connects local and remote machines. To avoid relying on a third party, a local instance of Headscale is used as the Tailscale login server.
136 -
137 -{{code language="bash"}}
138 -curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null
139 -curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list
140 -
141 -sudo apt-get update
142 -sudo apt-get install tailscale
143 -
144 -systemctl status tailscaled.service
145 -sudo tailscale up --login-server https://SUBDOMAIN.schleppe.cloud
146 -tailscale status
147 -{{/code}}
148 -
149 -Connect to headscale login server:
150 -
151 -{{code language="none"}}
152 -$ sudo tailscale up --login-server https://SUBDOMAIN.schleppe.cloud
153 -
154 -To authenticate, visit:
155 -
156 - https://SUBDOMAIN.schleppe.cloud/register/nodekey:fe30125f6dc09b2ac387a3b06c3ebc2678f031d07bd87bb76d91cd1890226c9f
157 -
158 -Success.
159 -{{/code}}
160 -
161 -View more info in the docs: [[https:~~/~~/earvingad.github.io/posts/headscale/>>https://earvingad.github.io/posts/headscale/]]
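
The node key printed above must also be approved on the Headscale side. A sketch, assuming a recent Headscale release (older releases use `-n NAMESPACE` instead of `--user`) and a hypothetical user named `backup`:

{{code language="bash"}}
# on the headscale host: register the node key shown by `tailscale up`
headscale nodes register --user backup --key nodekey:fe30125f6dc09b2ac387a3b06c3ebc2678f031d07bd87bb76d91cd1890226c9f
{{/code}}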
162 -
163 -= Jottacloud client =
164 -
165 -The cloud backup provider used is Jottacloud. They provide a CLI to easily add directories to sync to their cloud backup storage.
166 -NOTE! This setup still uses the user `kevin` and not the intended `jottad` user.
167 -
168 -(((
169 -{{code language="none"}}
170 -# install jotta-cli
171 -sudo curl -fsSL https://repo.jotta.cloud/public.asc -o /usr/share/keyrings/jotta.gpg
172 -echo "deb [signed-by=/usr/share/keyrings/jotta.gpg] https://repo.jotta.cloud/debian debian main" | sudo tee /etc/apt/sources.list.d/jotta-cli.list
173 -sudo apt-get update
174 -sudo apt-get install jotta-cli
175 -
176 -# configure runtime environment
177 -sudo useradd -m jottad
178 -sudo usermod -a -G jottad backup
179 -{{/code}}
180 -)))
181 -
182 -Create the systemd unit file `/usr/lib/systemd/user/jottad.service` and enable it (see the sketch after the unit file below):
183 -
184 -(((
185 -
186 -
187 -{{code language="ini" layout="LINENUMBERS" title="/usr/lib/systemd/user/jottad.service"}}
188 -[Unit]
189 -Description=Jotta client daemon
190 -
191 -[Service]
192 -Type=notify
193 -# Group=backup
194 -# UMask=0002
195 -
196 -# EnvironmentFile=-%h/.config/jotta-cli/jotta-cli.env
197 -ExecStart=/usr/bin/jottad stdoutlog datadir %h/.jottad/
198 -Restart=on-failure
199 -
200 -[Install]
201 -WantedBy=default.target
202 -{{/code}}
203 -)))
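
To activate the daemon and add the datastore directory to the cloud backup, a sketch (assuming the unit runs as a user service and that backing up `/mnt/pergamum` directly is the goal; `jotta-cli login` prompts for Jottacloud credentials):

{{code language="bash"}}
# enable and start the user service
systemctl --user daemon-reload
systemctl --user enable --now jottad.service

# authenticate, add the datastore directory and check sync progress
jotta-cli login
jotta-cli add /mnt/pergamum
jotta-cli status
{{/code}}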
204 -
205 -== Flaws ==
206 -
207 -Since Proxmox Backup Server uses chunks to deduplicate data, the complete set of files is required for a restore. This makes it impossible to download a single file representing a VM or LXC; all files must be downloaded and imported into Proxmox Backup Server for reconstruction.
208 -
209 -It also seems like a LOT of files shift around - being added and deleted - which makes the diff uploaded to Jottacloud huge.
210 -
211 -= Client Configuration =
212 -
213 -Configure Backup at the Datacenter or PVE host level in the Proxmox web GUI. If a backup storage is already added, input the following preferences (a sketch for adding the storage from the PVE CLI follows the list):
214 -
215 -* selection mode: include selected VMs
216 -* send email to: [[[email protected]>>mailto:[email protected]]]
217 -* email: on failure only
218 -* mode: snapshot
219 -* enabled: true
220 -* job comment: ~{~{guestname}}, ~{~{node}}, ~{~{vmid}}
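
If the backup storage has not been added yet, it can be attached from a PVE node's shell. A minimal sketch; the storage ID `pbs-clio` is hypothetical, and the token secret and fingerprint (shown on the PBS dashboard) must be filled in:

{{code language="bash"}}
# on the PVE host: attach the PBS datastore as a backup storage
pvesm add pbs pbs-clio \
  --server clio.schleppe \
  --datastore proxmox-backup \
  --namespace CLIENT_NAME \
  --username 'root@pam!CLIENT_NAME' \
  --password 'TOKEN_SECRET' \
  --fingerprint 'PBS_FINGERPRINT'
{{/code}}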
221 -)))
222 -
223 -
224 224  (% class="col-xs-12 col-sm-4" %)
225 225  (((
226 226  {{box title="**Contents**"}}
227 227  {{toc/}}
228 228  {{/box}}
34 +
35 +[[image:[email protected]]]
36 +//Figure 1: [[Sea>>https://commons.wikimedia.org/wiki/File:Isle_of_Icacos_II.jpg]]//
37 +
38 +[[image:[email protected]]]
39 +//Figure 2: [[Waves>>https://commons.wikimedia.org/wiki/File:Culebra_-_Playa_de_Flamenco.jpg]]//
229 229  )))
230 230  )))