Changes for page Proxmox Backup server

Last modified by Kevin Wiki on 2024/05/21 21:23

From version 22.1
edited by Kevin Wiki
on 2024/04/06 14:16
Change comment: There is no comment for this version
To version 11.1
edited by Kevin Wiki
on 2024/04/06 10:51
Change comment: There is no comment for this version

Summary

Details

Page properties
Content
... ... @@ -2,11 +2,8 @@
2 2  (((
3 3  (% class="col-xs-12 col-sm-8" %)
4 4  (((
5 -(% class="wikigeneratedid" %)
6 -Backups are primarily done through Proxmox Backup Server, taking snapshots of running LXCs and VMs. These are stored on a mirrored ZFS array and synchronized to both an off-site location and a cloud storage provider.
5 += Configuration =
7 7  
8 -= Backup Server configuration =
9 -
10 10  Backup server is setup with:
11 11  
12 12  * zfs storage
... ... @@ -18,206 +18,82 @@
18 18  ** permissions
19 19  * timings and simulator
20 20  
21 -== ZFS storage array ==
18 += ZFS storage array =
22 22  
23 23  There are currently 2 x 8TB WD drives. Current pool status:
24 24  
25 -(((
26 -{{code language="none"}}
27 -kevin@clio:~$ sudo zpool status pergamum
28 -pool: pergamum
29 -state: ONLINE
30 -  scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
22 +{{{kevin@clio:~$ sudo zpool status pergamum
23 + pool: pergamum
24 + state: ONLINE
25 + scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
31 31  config:
32 -        NAME                                            STATE     READ WRITE CKSUM
33 -        pergamum                                        ONLINE       0     0     0
34 -          raidz1-0                                      ONLINE       0     0     0
35 -            scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1  ONLINE       0     0     0
36 -            sdc1                                        ONLINE       0     0     0
37 -errors: No known data errors
38 -{{/code}}
39 -)))
40 40  
28 + NAME STATE READ WRITE CKSUM
29 + pergamum ONLINE 0 0 0
30 + raidz1-0 ONLINE 0 0 0
31 + scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1 ONLINE 0 0 0
32 + sdc1 ONLINE 0 0 0
41 41  
42 -=== Creating and expanding zfs pool ===
34 +errors: No known data errors}}}
43 43  
44 -(((
45 -{{code language="none"}}
36 +
37 +== Creating and expanding zfs pool ==
38 +
39 +```
46 46  zpool create pergamum raidz /dev/disk/by-partuuid/9fab17e5-df2d-2448-b5d4-10193c673a6b /dev/disk/by-partuuid/f801ed37-1d6c-ee40-8b85-6bfc49aba0fb -f
47 47  zfs set mountpoint=/mnt/pergamum pergamum
48 48  (zpool import -c /etc/zfs/zpool.cache -aN)
49 49  zpool export pergamum
50 -{{/code}}
51 -)))
44 +```
52 52  
53 53  
54 -(((
55 55  Have not tried yet, but adding another set of disks as an additional top-level virtual device (vdev) to the existing RAID-Z pool:
56 -
57 -{{code language="none"}}
48 +```
58 58  zpool add -n pergamum raidz DISK1 DISK2
59 -{{/code}}
60 -
61 -
50 +```
62 -~> NOTE! `-n` performs a dry run; remove it to commit.
63 -)))
64 64  
65 -
66 -== Access Control ==
67 -
68 -Each client host that wants to back up its contents to the backup server should have its own unique API token for authentication.
69 -
70 -API Token:
71 -
72 -* user: root@pam
73 -* token name: CLIENT_NAME
74 -* expire: never
75 -* enabled: true
76 -
77 -Permissions - Add a API Token Permission:
78 -
79 -* path: /datastore/proxmox-backup/CLIENT_NAME
80 -* api token: root@pam!CLIENT_NAME
81 -* role: DatastoreBackup
82 -* propagate: true
83 -
84 ->Note! The path will not be defined until after the Datastore namespace is defined in the steps below.
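The same token and permission can also be created from the backup server's shell; a sketch, assuming the datastore is named `proxmox-backup` and `CLIENT_NAME` is a placeholder:

```shell
# generate an API token for root@pam named CLIENT_NAME (the secret is shown only once)
proxmox-backup-manager user generate-token root@pam CLIENT_NAME

# grant the token the DatastoreBackup role on the client's datastore path
proxmox-backup-manager acl update \
  /datastore/proxmox-backup/CLIENT_NAME DatastoreBackup \
  --auth-id 'root@pam!CLIENT_NAME'
```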
85 -
86 -== Proxmox datastore ==
87 -
88 -If none exists, create the datastore. Ours is named `proxmox-backup` and points to the ZFS storage mounted at `/mnt/pergamum`. All references to `proxmox-backup` below refer to whatever you named it in this create step.
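For reference, the datastore can also be created on the command line; a sketch assuming the ZFS mountpoint above:

```shell
# create a datastore named proxmox-backup backed by the ZFS pool mounted at /mnt/pergamum
proxmox-backup-manager datastore create proxmox-backup /mnt/pergamum
```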
89 -
90 -=== Namespace ===
91 -
92 -Namespaces are what we use within a datastore to separate permissions per host. It's important to create these for the API tokens created in the Access Control section above.
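Namespaces can be created from the web GUI or from a client; a sketch, where `CLIENT_NAME` and `BACKUP_SERVER` are placeholders:

```shell
# create a per-client namespace in the proxmox-backup datastore
proxmox-backup-client namespace create CLIENT_NAME \
  --repository 'root@pam!CLIENT_NAME@BACKUP_SERVER:proxmox-backup'
```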
93 -
94 -=== Prune & Garbage collect ===
95 -
96 -We don't require backups for every day of the year. Pruning systematically deletes older backups, retaining only a given number of backups per time interval. There exists a fantastic simulator that can be used to experiment with different backup schedules and prune options: [[https:~~/~~/pbs.proxmox.com/docs/prune-simulator/>>https://pbs.proxmox.com/docs/prune-simulator/]]. The current configuration is:
97 -
98 -* datastore: proxmox-backup
99 -* namespace: root
100 -* keep last: 4
101 -* keep: hourly: -
102 -* keep daily: 6
103 -* keep weekly: 3
104 -* keep monthly: 6
105 -* keep yearly: 4
106 -* max_depth: full
107 -* prune schedule: 0/6:00
108 -* enabled: true
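The retention settings above map one-to-one onto `--keep-*` flags, which can be tested per backup group with a dry run; a sketch, where `vm/100`, `CLIENT_NAME` and `BACKUP_SERVER` are placeholders:

```shell
# dry-run prune of one backup group, mirroring the retention policy above
proxmox-backup-client prune vm/100 --dry-run \
  --keep-last 4 --keep-daily 6 --keep-weekly 3 \
  --keep-monthly 6 --keep-yearly 4 \
  --repository 'root@pam!CLIENT_NAME@BACKUP_SERVER:proxmox-backup'
```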
109 -
110 -=== Verify jobs ===
111 -
112 -Current configuration is:
113 -
114 -* local datastore: proxmox-backup
115 -* namespace: root
116 -* max-depth: full
117 -* schedule: daily
118 -* skip verified: true
119 -* re-verify after: 30 days
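Besides the scheduled job, a verification of the whole datastore can be triggered manually on the backup server; a sketch:

```shell
# manually start verification of all snapshots in the datastore
proxmox-backup-manager verify proxmox-backup
```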
120 -
121 -=== Permissions ===
122 -
123 -Permissions are explained in the Access Control section above, but it can be easier to configure permissions from the datastore. Navigate to the datastore Permission tab and add API Token Permission:
124 -
125 -* path: /datastore/proxmox-backup/CLIENT_NAME
126 -* API Token: root@pam!CLIENT_NAME
127 -* Role: DatastoreBackup
128 -* Propagate: true
129 -
130 130  = Tailscale =
131 131  
132 132  Tailscale is used to create a network that uses WireGuard to transparently connect local and remote machines. To avoid depending on a third party, a local instance of headscale is used as the Tailscale login server.
133 133  
134 -{{code language="bash"}}
135 -curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null
136 -curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list
57 +Setting up a connection should only require `sudo tailscale up ~-~-login-server https:~/~/TAILSCALE_SUBDOMAIN.schleppe.cloud`.
58 +To view the status: `sudo tailscale status`.
137 137  
138 -sudo apt-get update
139 -sudo apt-get install tailscale
60 += Client Configuration =
140 140  
141 -systemctl status tailscaled.service
142 -sudo tailscale up --login-server SUBDOMAIN.schleppe.cloud
143 -tailscale status
144 -{{/code}}
62 +Configure Backup on the Datacenter or PVE host level in the Proxmox web GUI. If a backup storage is already added, input the following preferences:
63 +\\{{code language="none" width="100%"}}Selection mode: include selected VMs
64 +Send email to: kevin.midboe+{PVE_HOSTNAME}@gmail.com
65 +Email: On failure only
66 +Mode: Snapshot
67 +Enabled: True
68 +Job Comment: {{guestname}}, {{node}}, {{vmid}}{{/code}}
145 145  
146 -Connect to headscale login server:
70 += Methodology =
147 147  
148 -{{code language="none"}}
149 -$ sudo tailscale up --login-server https://SUBDOMAIN.schleppe.cloud
72 +Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
150 150  
151 -To authenticate, visit:
74 +== Sub-paragraph ==
152 152  
153 - https://SUBDOMAIN.schleppe.cloud/register/nodekey:fe30125f6dc09b2ac387a3b06c3ebc2678f031d07bd87bb76d91cd1890226c9f
76 +Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
154 154  
155 -Success.
156 -{{/code}}
78 += Proxmox backup server =
157 157  
158 -View more info in the docs: [[https:~~/~~/earvingad.github.io/posts/headscale/>>https://earvingad.github.io/posts/headscale/]]
80 += =
159 159  
160 -= Jottacloud client =
82 +Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
161 161  
162 -The cloud backup provider used is Jottacloud. They provide a CLI to easily add directories to sync to their cloud backup storage.
163 -NOTE! This setup still uses the user `kevin` and not the correct `jottad` user.
84 +== Sub-paragraph ==
164 164  
165 -(((
166 -{{code language="none"}}
167 -# install jotta-cli
168 -sudo curl -fsSL https://repo.jotta.cloud/public.asc -o /usr/share/keyrings/jotta.gpg
169 -echo "deb [signed-by=/usr/share/keyrings/jotta.gpg] https://repo.jotta.cloud/debian debian main" | sudo tee /etc/apt/sources.list.d/jotta-cli.list
170 -sudo apt-get update
171 -sudo apt-get install jotta-cli
86 +Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
172 172  
173 -# configure runtime environment
174 -sudo useradd -m jottad
175 -sudo usermod -a -G jottad backup
176 -{{/code}}
177 -)))
88 +== Sub-paragraph ==
178 178  
179 -Create the systemd unit file `/usr/lib/systemd/user/jottad.service`:
180 -
181 -(((
182 -
183 -
184 -{{code language="ini" layout="LINENUMBERS" title="/usr/lib/systemd/user/jottad.service"}}
185 -[Unit]
186 -Description=Jotta client daemon
187 -
188 -[Service]
189 -Type=notify
190 -# Group=backup
191 -# UMask=0002
192 -
193 -# EnvironmentFile=-%h/.config/jotta-cli/jotta-cli.env
194 -ExecStart=/usr/bin/jottad stdoutlog datadir %h/.jottad/
195 -Restart=on-failure
196 -
197 -[Install]
198 -WantedBy=default.target
199 -{{/code}}
90 +Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
200 200  )))
201 201  
202 -== Flaws ==
203 203  
204 -Since Proxmox Backup Server uses chunks for deduplicating data, a complete file list is required. This makes it impossible to download a single file representing a VM or LXC; all chunk files must be downloaded and imported into Proxmox Backup Server for reconstruction.
205 -
206 -It also seems like a LOT of files shift (are added and deleted) between runs, making the diff uploaded to Jottacloud huge.
207 -
208 -= Client Configuration =
209 -
210 -Configure Backup on the Datacenter or PVE host level in the proxmox web GUI. If a backup storage is already added input the following preferences:
211 -
212 -* selection mode: include selected VMs
213 -* send email to: kevin.midboe+{PVE_HOSTNAME}@gmail.com
214 -* email: on failure only
215 -* mode: snapshot
216 -* enabled: true
217 -* job comment: ~{~{guestname}}, ~{~{node}}, ~{~{vmid}}
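The job above corresponds roughly to a manual `vzdump` invocation; a sketch, where the VMID `100` and the storage name `proxmox-backup` are placeholders:

```shell
# one-off backup of guest 100, mirroring the job settings above
vzdump 100 --mode snapshot --storage proxmox-backup \
  --mailto 'kevin.midboe+{PVE_HOSTNAME}@gmail.com' --mailnotification failure
```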
218 -)))
219 -
220 -
221 221  (% class="col-xs-12 col-sm-4" %)
222 222  (((
223 223  {{box title="**Contents**"}}