- Install Proxmox VE 9.x (Terminal UI)
- Agree to license terms
- Target disk, then Next
- Region
  - Country: United States
  - Timezone: America/New_York
  - Keyboard: US English
- Admin
  - Password: password123
  - Confirm: password123
  - Email: admin@pve.lab
- Management Interface
  - Hostname: pve.lab
  - IP: 10.0.0.10/24
  - Gateway: 10.0.0.1
  - DNS: 1.1.1.1
- Install, then Next
- When the install completes, remove the USB drive; the host will auto-reboot.
**Note:** If you have connected to pve.lab or 10.0.0.10 before,
you will need to remove the existing SSH entry from ~/.ssh/known_hosts:

```bash
ssh-keygen -R "10.0.0.10"
```

Automated infrastructure setup for a Proxmox-based home lab with DNS, VM templates, PostgreSQL, and infrastructure components.
- Proxmox VE 9.x (required)
- Network: 10.0.0.0/24 subnet with vmbr0 bridge
- Storage: local-lvm storage configured
- SSH access to the Proxmox host
- Ed25519 SSH key at ~/.ssh/id_ed25519
- Ansible SSH key will be auto-generated if missing
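If the Ed25519 key does not exist yet, one minimal way to create it (the empty passphrase and comment string are assumptions suitable only for a lab):

```shell
# Path the init scripts expect; override KEY_PATH to generate elsewhere
KEY_PATH="${KEY_PATH:-$HOME/.ssh/id_ed25519}"
mkdir -p "$(dirname "$KEY_PATH")"

# Generate an Ed25519 keypair with no passphrase (lab use only);
# skip if a key already exists so nothing is overwritten
[ -f "$KEY_PATH" ] || ssh-keygen -t ed25519 -f "$KEY_PATH" -N "" -C "homelab"
```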
```bash
./init-proxmox.sh $PROXMOX_HOST_IP $PROXMOX_ROOT_PASSWORD $POSTGRES_PASSWORD
```

Example:

```bash
./init-proxmox.sh 10.0.0.10 myRootPassword myPostgresPassword
```

- You will be prompted to accept the Proxmox instance as a known host.
- You will then be prompted for the root password of the Proxmox instance.
The initialization takes a few minutes and sets up:

- Proxmox subscription sources disabled
- Proxmox no-subscription repositories enabled
- SSH keys copied to the Proxmox host
- Resource pools created: infra, dev, uat, prod, templates
- image-builder@pve user with an API token for Cluster API
- BIND9 DNS server configured on the Proxmox host
- Proxmox acts as the authoritative DNS provider for the .lab domain, with dynamic DNS (DDNS) support
- TSIG-secured DNS updates
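A TSIG-secured dynamic update can be driven with `nsupdate`; the server, zone, record, TTL, and key file path below are illustrative assumptions, not values taken from this repo's BIND config:

```shell
# Write an nsupdate batch file for a DDNS update against the lab zone.
# Hostname and IP here are hypothetical examples.
cat > /tmp/ddns-update.txt <<'EOF'
server 10.0.0.10
zone lab
update add myvm.lab 300 A 10.0.0.50
send
EOF

# Send it signed with the TSIG key BIND was configured with
# (key file path is an assumption):
# nsupdate -k /etc/bind/ddns.key /tmp/ddns-update.txt
```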
- Ubuntu 24.04 cloud image
- QEMU guest agent enabled
- SSH keys for the john and ansible users installed
- Auto-registers its hostname with DNS on boot
- Note: The base template automatically shuts down after creation. This enables unattended builds.
  Clone the template and use a cloud-init "topper" script to override the shutdown behavior (see the blank VM example).
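As a sketch, a topper can be a cloud-config snippet whose user-data simply omits the base template's shutdown step, so the clone stays running. File name and contents here are hypothetical; the repo's generate-cloud-init-files.sh builds the real (MIME) snippet:

```shell
# Hypothetical topper: replacing user-data means the base template's
# shutdown-after-setup directive is simply absent from the clone.
cat > /tmp/blank-topper.yaml <<'EOF'
#cloud-config
hostname: blank-vm
runcmd:
  - systemctl enable --now qemu-guest-agent
EOF

# Upload to snippet storage and attach to a clone (IDs are examples):
# scp /tmp/blank-topper.yaml root@10.0.0.10:/var/lib/vz/snippets/
# qm set <VM_ID> --cicustom "user=local:snippets/blank-topper.yaml"
```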
- Simple example in the dev pool
- Demonstrates how to use the base template with a cloud-init topper
- 1 core, 1 GB RAM, DHCP networking

- Cluster API image builder in the infra pool
- 2 cores, 4 GB RAM, 20 GB disk
- Static IP: 10.0.0.222/24

- PostgreSQL 18 server in the dev pool
- 4 cores, 16 GB RAM, 100 GB disk
- Static IP: 10.0.0.100/24
- Note: Commented out by default in init-proxmox.sh (line 33)
- Subnet: 10.0.0.0/24
- Gateway: 10.0.0.1
- DNS: Proxmox host IP (primary), 75.75.75.75 or 8.8.8.8 (fallback)
- Bridge: vmbr0
```
.
├── init-proxmox.sh              # Main initialization script
├── proxmox/
│   ├── proxmox-setup.sh         # Proxmox configuration
│   └── dns/                     # DNS zone files and BIND config
└── vms/
    ├── init-base.sh             # Base template creation
    ├── init-blank.sh            # Example blank VM
    ├── init-postgres.sh         # PostgreSQL VM
    ├── init-capi-manager.sh     # Cluster API manager VM
    ├── base/                    # Base template files
    ├── blank/                   # Blank VM files
    ├── postgres/                # PostgreSQL VM files
    └── cluster-api-manager/     # Cluster API manager files
```
You can run individual components instead of the full initialization:

```bash
# Base template only
cd vms
./init-base.sh $PROXMOX_IP

# Blank VM only
cd vms
./init-blank.sh $PROXMOX_IP

# PostgreSQL VM
cd vms
./init-postgres.sh $PROXMOX_IP $POSTGRES_PASSWORD

# Cluster API manager VM
cd vms
./init-capi-manager.sh $PROXMOX_IP
```

All non-k8s/CAPI VMs clone from the base template (VM ID 9999):
```bash
# Clone template
qm clone 9999 <VM_ID> --name <VM_NAME> --pool <POOL>

# Configure resources
qm set <VM_ID> --cores <CORES>
qm set <VM_ID> --memory <MEMORY_MB>

# Optional: resize disk
qm resize <VM_ID> scsi0 <SIZE>G

# Set cloud-init topper
qm set <VM_ID> --cicustom "user=local:snippets/<your-cloud-init>.mime"

# Configure networking
qm set <VM_ID> --ipconfig0 "ip=10.0.0.<IP>/24,gw=10.0.0.1"
qm set <VM_ID> --nameserver "<DNS_IP> 8.8.8.8"

# Add tags
qm set <VM_ID> --tags "tag1,tag2"

# Start VM
qm start <VM_ID>
```

```bash
# Test forward lookup
dig @10.0.0.10 pve.lab

# Test reverse lookup
dig @10.0.0.10 -x 10.0.0.10
```

```bash
# Personal user
ssh john@<VM_IP>

# Ansible user (key auto-generated during init-base.sh)
ssh -i ~/.ssh/ansible ansible@<VM_IP>
```

```bash
# Check BIND status
systemctl status named

# Verify zone files
named-checkzone lab /var/lib/bind/db.lab
named-checkzone 0.0.10.in-addr.arpa /var/lib/bind/db.10.0.0

# Test DNS
dig @localhost pve.lab
```

The base template is designed to shut down automatically after initial setup; this is expected behavior. To use the template, clone it and override the shutdown with a cloud-init topper (see vms/blank/generate-cloud-init-files.sh).

```bash
# Verify snippet exists
ls -la /var/lib/vz/snippets/

# Check VM config
qm config <VM_ID>

# Update cloud-init
qm cloudinit update <VM_ID>
```

See home-lab-components.md for planned additions:
- HCP Vault
- Artifactory
- HA Kubernetes
- Ceph Storage
- Grafana/Prometheus
- ArgoCD
- Kiali/Thanos
- SSO