Changes for page Proxmox Backup server

Last modified by Kevin Wiki on 2024/05/21 21:23

From version 12.1
edited by Kevin Wiki
on 2024/04/06 11:12
Change comment: There is no comment for this version
To version 6.1
edited by Kevin Wiki
on 2024/04/06 10:12
Change comment: There is no comment for this version

Summary

Details

Page properties
Content
... ... @@ -4,126 +4,6 @@
4 4  (((
5 5  = Configuration =
6 6  
7 -The backup server is set up with:
8 -
9 -* zfs storage
10 -* access control - api tokens
11 -* datastore
12 -** sync jobs
13 -** prune jobs
14 -** verify jobs
15 -** permissions
16 -* timings and simulator
17 -
18 -= ZFS storage array =
19 -
20 -There are currently 2 x 8TB WD drives. Current pool status:
21 -
22 -{{{kevin@clio:~$ sudo zpool status pergamum
23 - pool: pergamum
24 - state: ONLINE
25 - scan: scrub repaired 0B in 09:52:23 with 0 errors on Sun Mar 10 10:16:24 2024
26 -config:
27 -
28 - NAME STATE READ WRITE CKSUM
29 - pergamum ONLINE 0 0 0
30 - raidz1-0 ONLINE 0 0 0
31 - scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1 ONLINE 0 0 0
32 - sdc1 ONLINE 0 0 0
33 -
34 -errors: No known data errors}}}
35 -
36 -
37 -== Creating and expanding zfs pool ==
38 -
39 -```
40 -zpool create pergamum raidz /dev/disk/by-partuuid/9fab17e5-df2d-2448-b5d4-10193c673a6b /dev/disk/by-partuuid/f801ed37-1d6c-ee40-8b85-6bfc49aba0fb -f
41 -zfs set mountpoint=/mnt/pergamum pergamum
42 -(zpool import -c /etc/zfs/zpool.cache -aN)
43 -zpool export pergamum
44 -```
45 -
46 -
47 -We have not tried this yet, but to add another set of disks as an additional top-level virtual device to the existing RAID-Z pool:
48 -```
49 -zpool add -n pergamum raidz DISK1 DISK2
50 -```
51 -~> NOTE! `-n` is a dry run; remove it to commit.
52 -
53 -
54 -= Access Control =
55 -
56 -Each client host that wants to back up its contents to the backup server should have its own unique API token for authentication.
57 -
58 -API Token:
59 -
60 -* user: [[root@pam>>mailto:root@pam]]
61 -* token name: CLIENT_NAME
62 -* expire: never
63 -* enabled: true
64 -
65 -Permissions - add an API Token Permission:
66 -
67 -* path: /datastore/proxmox-backup/CLIENT_NAME
68 -* api token: root@pam!CLIENT_NAME
69 -* role: DatastoreBackup
70 -* propagate: true
71 -
72 ->Note! The path will not be defined until after the datastore namespace is defined in the steps below.
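
The same can be done from a shell on the backup server. A minimal sketch using the documented `proxmox-backup-manager` commands, assuming `CLIENT_NAME` is replaced with the actual client host name:

```
proxmox-backup-manager user generate-token root@pam CLIENT_NAME
proxmox-backup-manager acl update /datastore/proxmox-backup/CLIENT_NAME DatastoreBackup --auth-id 'root@pam!CLIENT_NAME'
```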
73 -
74 -= Proxmox datastore =
75 -
76 -If none exists, create the datastore. Ours is named `proxmox-backup` and points to the ZFS storage mounted at `/mnt/pergamum`. All later references to `proxmox-backup` refer to the name chosen in this create step.
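
If creating it from a shell instead of the web GUI, a minimal sketch:

```
proxmox-backup-manager datastore create proxmox-backup /mnt/pergamum
```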
77 -
78 -=== Namespace ===
79 -
80 -Namespaces are what we use within a datastore to separate permissions per host. It is important to create one for each API token created in the Access Control section above.
81 -
82 -=== Prune & Garbage collect ===
83 -
84 -We don't need to keep a backup for every day of the year. Pruning systematically deletes older backups, retaining a set number of backups per time interval. A fantastic simulator is available for experimenting with different backup schedules and prune options: [[https:~~/~~/pbs.proxmox.com/docs/prune-simulator/>>https://pbs.proxmox.com/docs/prune-simulator/]]. The current configuration (see the config sketch after this list) is:
85 -
86 -* datastore: proxmox-backup
87 -* namespace: root
88 -* keep last: 4
89 -* keep hourly: -
90 -* keep daily: 6
91 -* keep weekly: 3
92 -* keep monthly: 6
93 -* keep yearly: 4
94 -* max-depth: full
95 -* prune schedule: 0/6:00
96 -* enabled: true
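
For reference, the resulting prune job entry lives in `/etc/proxmox-backup/prune.cfg` and should look roughly like the sketch below. The `keep-*` option names are the documented PBS ones, but the job id is made up here and the exact file layout is an assumption:

```
prune: prune-proxmox-backup
    store proxmox-backup
    schedule 0/6:00
    keep-last 4
    keep-daily 6
    keep-weekly 3
    keep-monthly 6
    keep-yearly 4
```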
97 -
98 -=== Verify jobs ===
99 -
100 -The current configuration (see the config sketch after this list) is:
101 -
102 -* local datastore: proxmox-backup
103 -* namespace: root
104 -* max-depth: full
105 -* schedule: daily
106 -* skip verified: true
107 -* re-verify after: 30 days
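
The matching entry in `/etc/proxmox-backup/verification.cfg` should look roughly like this sketch (the job id is made up; `ignore-verified` and `outdated-after` correspond to the skip-verified and re-verify settings above, and the exact file layout is an assumption):

```
verification: verify-proxmox-backup
    store proxmox-backup
    schedule daily
    ignore-verified true
    outdated-after 30
```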
108 -
109 -=== Permissions ===
110 -
111 -Permissions are explained in the Access Control section above, but it can be easier to configure them from the datastore. Navigate to the datastore's Permissions tab and add an API Token Permission:
112 -
113 -* path: /datastore/proxmox-backup/CLIENT_NAME
114 -* API Token: root@pam!CLIENT_NAME
115 -* Role: DatastoreBackup
116 -* Propagate: true
117 -
118 -= Tailscale =
119 -
120 -Tailscale is used to create a network that uses WireGuard to transparently connect local and remote machines. To avoid depending on a third party, a local headscale instance is used as the tailscale login server.
121 -
122 -Setting up a connection should only require `sudo tailscale up ~-~-login-server https:~/~/TAILSCALE_SUBDOMAIN.schleppe.cloud`.
123 -To view the status: `sudo tailscale status`.
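
On the headscale side, the new machine usually has to be registered before it joins the network. A rough sketch, assuming a recent headscale release and a user named `kevin` (both assumptions; the exact flags differ between headscale versions):

```
headscale users list
headscale nodes register --user kevin --key NODE_KEY
headscale nodes list
```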
124 -
125 -= Client Configuration =
126 -
127 127  Configure Backup at the Datacenter or PVE host level in the Proxmox web GUI. If a backup storage is already added, input the following preferences:
128 128  \\{{code language="none" width="100%"}}Selection mode: include selected VMs
129 129  Send email to: kevin.midboe+{PVE_HOSTNAME}@gmail.com
... ... @@ -132,6 +132,73 @@
132 132  Enabled: True
133 133  Job Comment: {{guestname}}, {{node}}, {{vmid}}{{/code}}
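
On the PVE side, these GUI settings end up as a `vzdump` job entry in `/etc/pve/jobs.cfg`. A rough sketch; the job id and schedule are placeholders, not values from this page:

```
vzdump: backup-pbs-clients
    schedule SCHEDULE
    storage proxmox-backup
    mailto kevin.midboe+PVE_HOSTNAME@gmail.com
    mode snapshot
    enabled 1
    notes-template {{guestname}}, {{node}}, {{vmid}}
```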
134 134  
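The following DaemonSet deploys the Prometheus node exporter on every Kubernetes node, exposing host metrics on port 9100:
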
15 +{{code language="yaml" layout="LINENUMBERS"}}apiVersion: apps/v1
16 +kind: DaemonSet
17 +metadata:
18 + name: prometheus-node-exporter-daemonset
19 + namespace: monitoring
20 + labels:
21 + app: prometheus-node-exporter
22 +spec:
23 + selector:
24 + matchLabels:
25 + app: prometheus-node-exporter
26 + template:
27 + metadata:
28 + labels:
29 + app: prometheus-node-exporter
30 + spec:
31 + containers:
32 + - args:
33 + - --path.procfs=/host/proc
34 + - --path.sysfs=/host/sys
35 + - --path.rootfs=/host/root
36 + - --web.listen-address=:9100
37 + image: quay.io/prometheus/node-exporter:latest
38 + imagePullPolicy: IfNotPresent
39 + name: prometheus-node-exporter
40 + ports:
41 + - name: metrics
42 + containerPort: 9100
43 + hostPort: 9100
44 + securityContext:
45 + allowPrivilegeEscalation: false
46 + volumeMounts:
47 + - mountPath: /host/proc
48 + name: proc
49 + readOnly: true
50 + - mountPath: /host/sys
51 + name: sys
52 + readOnly: true
53 + - mountPath: /host/root
54 + mountPropagation: HostToContainer
55 + name: root
56 + readOnly: true
57 + hostNetwork: true
58 + hostPID: true
59 + restartPolicy: Always
60 + tolerations:
61 + - key: "node-role.kubernetes.io/master"
62 + effect: "NoSchedule"
63 + volumes:
64 + - hostPath:
65 + path: /proc
66 + type: ""
67 + name: proc
68 + - hostPath:
69 + path: /sys
70 + type: ""
71 + name: sys
72 + - hostPath:
73 + path: /
74 + type: ""
75 + name: root
76 + updateStrategy:
77 + rollingUpdate:
78 + maxSurge: 0
79 + maxUnavailable: 1
80 +  type: RollingUpdate{{/code}}
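
Assuming a working kubeconfig, the manifest can be applied with `kubectl apply -f node-exporter-daemonset.yaml` (filename hypothetical) and checked with `kubectl -n monitoring get pods -l app=prometheus-node-exporter`.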
81 +
135 135  = Methodology =
136 136  
137 137  Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
... ... @@ -161,5 +161,11 @@
161 161  {{box title="**Contents**"}}
162 162  {{toc/}}
163 163  {{/box}}
111 +
112 +[[image:[email protected]]]
113 +//Figure 1: [[Sea>>https://commons.wikimedia.org/wiki/File:Isle_of_Icacos_II.jpg]]//
114 +
115 +[[image:[email protected]]]
116 +//Figure 2: [[Waves>>https://commons.wikimedia.org/wiki/File:Culebra_-_Playa_de_Flamenco.jpg]]//
164 164  )))
165 165  )))