docs: refresh repo README and add component guides

Jordan Wages 2025-09-23 22:48:06 -05:00
commit a12e713d1f
3 changed files with 152 additions and 64 deletions


@@ -1,71 +1,29 @@

# Proxmox Server Setup

Scripts and configuration used to bring new Debian VMs online in a Proxmox cluster and wire them into supporting services. Each component keeps its own focused documentation so you can grab just what you need.

## Components

- [Setup Wizard](docs/setup-wizard/README.md): interactive bootstrap that hardens a freshly cloned Debian template, installs tooling, and wires in NAS shares.
- [Uptime Kuma Push Client](docs/uptime-kuma/README.md): lightweight agent that reports VM health checks to Uptime Kuma using the push API.
- _Future additions_: drop new tooling under `docs/<component>/README.md` and link it here.

## Quick Start

1. **Prepare a Debian template** on Proxmox with either `curl` or `wget` available.
2. **Run the bootstrap** on first boot:

   ```bash
   curl -fsSL https://git.jordanwages.com/wagesj45/proxmox-server-setup/raw/branch/main/bootstrap.sh | bash
   ```

   The helper script will escalate to root if needed, fetch the latest `setup.sh`, and guide you through hostname, sudo users, SSH keys, updates, and NFS mounts.
3. **Wire monitoring** by scheduling `kuma-push.sh` (e.g. via a `systemd` timer or cron) once you configure your checks in `kuma-checks.json`.

## Project Layout

- `bootstrap.sh`: thin wrapper that always pulls the current `setup.sh` from Git and runs it as root.
- `setup.sh`: whiptail-based wizard that applies updates, manages SSH keys, configures optional tooling, and maintains NFS mounts.
- `kuma-push.sh`: sends heartbeat updates to Uptime Kuma based on the inventory in `kuma-checks.json`.
- `kuma-checks.json`: declarative list of HTTP/service/docker/mount/disk checks for the push client.
- `docs/`: component documentation; add new READMEs here as the toolkit grows.

## Contributing & Maintenance

- Keep scripts idempotent so they can be re-run safely during template refreshes.
- When you add features, document the new behaviour in a dedicated `docs/<component>/README.md` and update the component list above.
- Run through the setup wizard on a throwaway VM after major changes to confirm the full flow still works.


@@ -0,0 +1,40 @@
# Setup Wizard
Interactive bootstrap for Debian-based VM templates. The wizard is safe to re-run and is designed for Proxmox clones that need the same base hardening, tooling, and share mounts before they enter service.
## Files
- `bootstrap.sh`: convenience wrapper you drop on the template. It escalates to root, pulls the latest wizard, and executes it.
- `setup.sh`: the primary script. Uses `whiptail` menus to configure the VM.
## Requirements
- Debian 12/13 guest with network access to `git.jordanwages.com`.
- Either `curl` or `wget`. The wizard auto-installs `whiptail` if it is missing.
- Ability to escalate to root (`sudo`, root shell, or root password).
## Running The Wizard
1. Place `bootstrap.sh` on the template (or use the `curl | bash` one-liner in the main README).
2. Execute `sudo ./bootstrap.sh` (or simply `./bootstrap.sh` as root).
3. Follow the `whiptail` prompts. You can cancel at any step; no changes are committed until each section completes.
You can re-run `setup.sh` at any time to reapply updates, rotate keys, or adjust NAS mounts. The managed sections overwrite prior state so the machine always matches the latest answers.
## What The Wizard Configures
- **Core packages**: installs `sudo`, `curl`, `gnupg`, `lsb-release`, `nfs-common`, and any optional tools you choose (`htop`, `jq`, `git`, etc.).
- **System updates**: full `apt-get dist-upgrade` with a progress gauge.
- **Sudo access**: toggles passwordless sudo for any non-system users on the box.
- **Hostname**: applies the new name via `hostnamectl` and updates `/etc/hosts` to match.
- **SSH keys**: overwrites `authorized_keys` for `root` and the primary user (`jordanwages` by default) after deduplicating entries.
- **Shell quality-of-life**: installs bat/ncdu aliases and replaces `neofetch` with `fastfetch` for login banners.
- **NFS mounts**: manages a comment-delimited block in `/etc/fstab` for the NAS hosts you select and attempts to mount them immediately.
- **Logging**: tee'd transcript stored at `/var/log/freshbox.log` on each run.
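For reference, the managed `/etc/fstab` section might look like the sketch below; the exact marker comments, export paths, and mount options are illustrative (check `setup.sh` for the real format), but the shape is a comment-delimited block the wizard can rewrite wholesale on each run:

```
# >>> setup.sh managed NFS mounts >>>
nas01:/export/media   /media/nas01   nfs   defaults,_netdev   0   0
nas02:/export/backup  /media/nas02   nfs   defaults,_netdev   0   0
# <<< setup.sh managed NFS mounts <<<
```

Keeping everything between the markers lets the wizard replace the block without disturbing hand-edited entries elsewhere in the file.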
## Customization
- **Default SSH key**: edit `DEFAULT_SSH_KEY` near the top of `setup.sh` to your preferred public key.
- **Optional packages**: expand the `TOOLS=(...)` array; everything listed appears as a checkbox in the “Extra Utilities” menu.
- **NAS hosts**: change the `NFS_HOSTS=(...)` array to match your environment. Each value becomes a selectable share under `/media/<host>`.
- **Primary user**: adjust `USER_NAME` if your golden image uses a different account than `jordanwages`.
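Concretely, the knobs above might look like this near the top of `setup.sh`; the values shown here are illustrative placeholders, not the script's actual defaults:

```shell
# Illustrative values only -- edit these in your copy of setup.sh.
DEFAULT_SSH_KEY="ssh-ed25519 AAAAC3Example admin@example"  # key offered for authorized_keys
USER_NAME="jordanwages"                                    # primary user the wizard manages
TOOLS=(htop jq git ncdu bat)                               # checkboxes in the "Extra Utilities" menu
NFS_HOSTS=(nas01 nas02)                                    # each host becomes a share under /media/<host>
```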
## Troubleshooting
- Wizard aborts with missing `whiptail`: rerun the script; it self-installs `whiptail` before launching menus.
- NFS mounts fail: the wizard leaves a ⚠️ note in the final dialog. Check `/var/log/freshbox.log` for the mount error and re-run once networking or permissions are fixed.
- Need to inspect actions: review `/var/log/freshbox.log`, which captures command output from the session.


@@ -0,0 +1,90 @@
# Uptime Kuma Push Client
`kuma-push.sh` sends heartbeat updates to your Uptime Kuma instance by iterating through a JSON inventory of checks. It is built for guests that cannot be polled directly but can reach the Kuma push endpoint.
## Files
- `kuma-push.sh`: performs the checks and submits results via HTTP GET.
- `kuma-checks.json`: declarative list of checks the script will execute.
## Requirements
- `bash`, `jq`, `wget`, and access to `https://status.jordanwages.com` (adjust `BASE` in the script if your Kuma lives elsewhere).
- Optional: `systemd`, `cron`, or another scheduler to run the script every minute.
## Configuration (`kuma-checks.json`)
Each object in the array represents one Kuma monitor and should contain:
| Field | Required | Description |
|-------|----------|-------------|
| `name` | ✔ | Human-friendly label shown inside Kuma. Sent back in the push message. |
| `type` | ✔ | One of `native`, `http`, `service`, `docker`, `mount`, or `disk`. |
| `push` | ✔ | Kuma push token for the monitor. |
| `target` | ◐ | Interpreted per check type (URL, systemd service name, container name, mount path, or disk device). Not used for `native`. |
| `threshold` | ◐ | Only used for `disk` checks. Marks the percentage usage at which the disk turns unhealthy. |
Example:
```json
{
"name": "web_frontend",
"type": "http",
"target": "http://localhost:8080/health",
"push": "xxxxxxxxxxxxxxxx"
}
```
## Check Types
- **native**: Always reports `up`. Useful for “is the agent alive” checks.
- **http**: Issues a `wget` to the target URL and reports `up` when the request succeeds. Ping time is the response time in ms.
- **service**: Calls `systemctl is-active` on the target unit.
- **docker**: Uses `docker inspect` to confirm the container is running.
- **mount**: Uses `mountpoint -q` to verify the target path is mounted.
- **disk**: Finds the mount that corresponds to the device (e.g. `vda5` or `/dev/sda1`) and reports `down` when usage meets/exceeds `threshold`.
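The disk evaluation can be sketched roughly as follows. This is a simplified stand-in, not the script's actual code: it checks `/` against a hard-coded 90% threshold, whereas the real script resolves the mount from the check's `target` device and reads `threshold` from `kuma-checks.json`:

```shell
# Sketch of a disk check: read the usage percentage for the filesystem
# backing a path and compare it against a threshold.
threshold=90
usage=$(df --output=pcent / | tail -1 | tr -dc '0-9')
if [ "$usage" -ge "$threshold" ]; then
  status="down"; msg="disk at ${usage}%"
else
  status="up"; msg="disk at ${usage}%"
fi
echo "$status"
```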
## Scheduling
A one-minute cadence keeps Kuma happy without overwhelming it. Two common options:
### systemd timer
Create `/etc/systemd/system/kuma-push.service`:
```ini
[Unit]
Description=Send Uptime Kuma push heartbeats
[Service]
Type=oneshot
ExecStart=/usr/local/bin/kuma-push.sh
```
Create `/etc/systemd/system/kuma-push.timer`:
```ini
[Unit]
Description=Run Uptime Kuma push heartbeats every minute
[Timer]
OnBootSec=1min
OnUnitActiveSec=1min
AccuracySec=5s
[Install]
WantedBy=timers.target
```
Enable both:
```bash
sudo systemctl daemon-reload
sudo systemctl enable --now kuma-push.timer
```
### cron
Copy the script into `/usr/local/bin` and drop the following into `crontab -e`:
```
* * * * * /usr/local/bin/kuma-push.sh >/dev/null 2>&1
```
## Customization Tips
- Update the `BASE` variable inside `kuma-push.sh` if your Kuma origin or token path changes.
- If `curl` is available you can swap the `wget` calls for `curl`; the current implementation favours `wget` so the template can stay minimal.
- Add jitter with the `sleep $(( RANDOM % 5 ))` line if you scale to dozens of VMs—feel free to widen the range.
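For orientation, a single heartbeat boils down to one GET against Kuma's push endpoint. The sketch below assumes the standard Uptime Kuma push API (`/api/push/<token>?status=&msg=&ping=`); `BASE` and the token are placeholders, and the real values come from `kuma-push.sh` and `kuma-checks.json`:

```shell
# Hypothetical single push; BASE and token are placeholders.
BASE="https://status.example.com"
token="xxxxxxxxxxxxxxxx"
status="up"; msg="OK"; ping_ms=12
url="${BASE}/api/push/${token}?status=${status}&msg=${msg}&ping=${ping_ms}"
echo "$url"
# wget -q -O /dev/null "$url"   # uncomment to actually send the heartbeat
```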
## Troubleshooting
- **Missing dependencies**: install `jq` and `wget` (`apt install jq wget`). The script exits if either is missing.
- **No pushes arriving**: confirm the machine reaches the Kuma endpoint manually with `wget`. Kuma will mark the monitor down automatically when pushes stop.
- **Disk check always down**: ensure the `target` resolves to a mounted device (`lsblk -f` and `findmnt` help confirm the correct name).