The Home Lab

My photos, passwords, and automation workflows were spread across a dozen SaaS tools and cloud providers (me trying to squeeze every free tier I could find :P) - each with its own subscription, its own limits, and its own management overhead. I replaced all of them with a single self-hosted system running on a repurposed MacBook Air.

Here are the details on how I built a low-cost, production-grade infrastructure that scratches most of my experimentation itch these days.

prince@ubuntu-mac:~$ neofetch --homelab
Uptime 24/7
Services 9 running
Nodes MacBook Air + GPU helper
Open Ports 0 (Cloudflare Tunnels)
Status All systems operational
prince@ubuntu-mac:~$

How it all connects

All traffic from the internet hits Cloudflare first - TLS, caching, DDoS protection, the works. A lightweight cloudflared daemon on the MacBook maintains an outbound-only tunnel, so nothing on my home network is directly reachable from outside. A second laptop with an NVIDIA GPU sits on the local network purely for Immich's ML workloads (face detection, smart search).

Key tradeoff: I went with Cloudflare Tunnels over port forwarding or a self-managed reverse proxy. It eliminates that attack surface entirely and is easy to manage for a simple home setup. Zero open ports, no router config to maintain - it has been well worth it.
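For anyone curious, the ingress side of a tunnel like this is just a short YAML file. A sketch, not my exact config - the tunnel ID and credentials path are placeholders, and the hostnames/ports are the ones my services use:

```yaml
# ~/.cloudflared/config.yml - sketch with placeholder tunnel ID
tunnel: <tunnel-id>
credentials-file: /home/prince/.cloudflared/<tunnel-id>.json

ingress:
  # public hostname -> local service, all over the outbound-only tunnel
  - hostname: immich.princejain.me
    service: http://localhost:2283
  - hostname: vault.princejain.me
    service: http://localhost:8888
  - hostname: portainer.princejain.me
    service: http://localhost:9000
  # cloudflared requires a catch-all rule at the end
  - service: http_status:404
```

The daemon dials out to Cloudflare's edge, so none of these services are ever listening on a publicly reachable port.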

$ cat /etc/homelab/topology.txt
[network topology diagram: internet → Cloudflare → outbound tunnel → MacBook → GPU node on LAN]

The hardware

The heart of the system is an old Intel MacBook Air running Ubuntu Server headless - lid closed, display off (squeezing out every bit of performance by freeing whatever resources I can), running 24/7. I wrote a custom systemd service that handles the lid-close without suspending (that was a fun rabbit hole). A spare Windows laptop with an NVIDIA GPU sits alongside it on the network, but only for one job: running Immich's ML library for face and object detection. It gets switched on whenever there are heavy ML tasks (think of a bulk upload post vacation :) ).

Approach: repurpose what I already had. Benefit: no budget needed for dedicated server hardware. Less reliable than rack-mounted gear, sure - but the uptime has been surprisingly solid and the cost was literally zero.
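As a reference point, the suspend side of the lid problem can be handled with stock systemd settings - a sketch of what I mean (the display-off logic lives in the custom daemon and isn't shown here):

```ini
# /etc/systemd/logind.conf - stop logind from suspending on lid close
# (restart systemd-logind after editing)
[Login]
HandleLidSwitch=ignore
HandleLidSwitchExternalPower=ignore
HandleLidSwitchDocked=ignore
```

Masking the sleep targets (`systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target`) closes off the remaining suspend paths.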

$ ssh prince@ubuntu-mac
prince@ubuntu-mac
─────────────────────
OS       Ubuntu Server 22.04
Host     Intel MacBook Air (headless)
Role     Primary server
IP       192.168.0.104
Uptime   24/7 (sleep disabled)
WiFi     Broadcom BCM4360
Storage  Samsung T7 Shield SSD
Docker   Compose V2 (plugin)
Special  Custom lid-monitor daemon

$ ssh prince@windows-gpu
prince@windows-gpu
─────────────────────
OS       Windows + WSL2
Host     NVIDIA GPU laptop
Role     ML offload node
IP       192.168.0.197
GPU      NVIDIA CUDA (v545+)
Service  Immich ML (face/object)
Port     3003

What's running

Nine services, split across three categories. Everything that needs external access goes through Cloudflare Tunnels - not a single port is opened on the router. Here's what's running and the decisions behind each one.

$ systemctl list-units --type=service --state=running
# ─── Core Infrastructure ───────────────────────────
cloudflared.service - Secure Reverse Tunnel LIVE
Active: active (running)
> Outbound-only tunnel - zero open ports on the router
> Free TLS, DDoS protection, and WAF from Cloudflare
docker.service - Container Runtime LIVE
Active: active (running)
Version Docker CE + Compose V2 (plugin)
> Official Docker repo, not Ubuntu's older version
> systemd-resolved disabled on port 53 for AdGuard
portainer.service - Docker Management UI LIVE
Active: active (running) Port: 9000 -> portainer.princejain.me
> Full Docker socket access for container management
> Protected behind Cloudflare Access authentication
# ─── Data & Privacy ────────────────────────────────
immich.service - Self-hosted Photos LIVE
Problem: Google Photos owns your memories. Lock-in, recurring cost, and zero control over something deeply personal.
Solution: Local-first photo storage with ML-powered search - all on an external SSD I can unplug and carry.
Active: active (running) Port: 2283 -> immich.princejain.me
Stack Server + PostgreSQL (pgvector) + Redis + ML
Storage Samsung T7 Shield SSD @ /mnt/immich
> External SSD for data portability - host can be replaced
> ML offloaded to GPU node over LAN for face/object detection
> SSD auto-mounts via UUID in /etc/fstab before Docker starts
vaultwarden.service - Password Manager LIVE
Problem: Your password manager holds the keys to everything. A third-party breach there is genuinely existential.
Solution: Self-hosted Bitwarden-compatible vault. Every credential stays on my hardware, synced across all my devices.
Active: active (running) Port: 8888 -> vault.princejain.me
> Self-hosted Bitwarden - all credentials stay on my hardware
adguard.service - Network Ad Blocker LIVE
Problem: You can install an ad blocker on a laptop, but good luck doing that on a smart TV or a phone app.
Solution: DNS-level filtering. Every device on my network gets ad blocking automatically - no per-device setup.
Active: active (running) Port: 53 (DNS) + 80 (Dashboard)
> Runs in host network mode to bind to port 53
> Required disabling systemd-resolved to free DNS port
# ─── Productivity ──────────────────────────────────
stirling-pdf.service - PDF Toolkit LIVE
Active: active (running) Port: 8080 -> pdf.princejain.me
> Merge, split, rotate, compress, OCR - all self-hosted
n8n.service - Automation Workflows LIVE
Active: active (running) Port: 5678 -> n8n.princejain.me
> Daily automation workflows for personal productivity
openclaw.service - Personal AI Assistant LIVE
Active: active (running)
> Named him Clawdius - my personal AI assistant
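The AdGuard notes above gloss over the port 53 dance. On Ubuntu, systemd-resolved holds a stub listener on 127.0.0.53:53, and the usual fix is a one-line config change - a sketch (verify against your distro's docs):

```ini
# /etc/systemd/resolved.conf - free port 53 for AdGuard Home
[Resolve]
DNSStubListener=no
```

After `systemctl restart systemd-resolved`, point `/etc/resolv.conf` at a working resolver (a common approach is symlinking it to `/run/systemd/resolve/resolv.conf`) so the host itself can still resolve names.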

Why I built it this way

Nothing here was accidental. These are the key architectural decisions that shaped the lab - written as comments in a config file, because honestly, that felt right for the vibe.

$ cat /etc/homelab/decisions.conf
# /etc/homelab/decisions.conf
# Last updated: Mar 2026
# ─────────────────────────────────────────────────────
# [SECURITY]

# WHY: Cloudflare Tunnels over Port Forwarding
Zero open ports on the router. Outbound-only HTTPS
connections. Free TLS, DDoS protection, and WAF.
No need to touch router settings or expose home IP.
# IMPACT: Zero attack surface. No firewall rules to maintain.

# [PORTABILITY]

# WHY: External SSD for Immich Storage
Data lives independently of the host machine.
Portable and upgradeable. Auto-mounts via fstab UUID.
# IMPACT: Full system migration in <10 min, zero data loss.
# The MacBook could be replaced without losing a single photo.

# [SCALABILITY]

# WHY: Offload ML to a separate GPU machine
Face detection and smart search are GPU-hungry. Instead of
cramming it all on the MacBook, a spare Windows laptop with
an NVIDIA GPU handles just Immich's ML library over LAN.
# IMPACT: MacBook CPU stays under 15%. GPU node can go
# offline without breaking Immich - jobs just queue up.

# [COST]

# WHY: Headless MacBook as Server
Custom lid-monitor systemd service turns off display
when lid closes without suspending. Sleep/hibernate
disabled entirely. 24/7 uptime with minimal power draw.
# IMPACT: $0 hardware cost. Replaces ~$15/mo in SaaS subscriptions.
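The [PORTABILITY] entry above leans on a one-line fstab entry plus a Docker drop-in. A sketch - the UUID is a placeholder and the drop-in filename is my own naming:

```ini
# /etc/fstab - auto-mount the T7 Shield by UUID at /mnt/immich
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/immich  ext4  defaults,noatime  0  2

# /etc/systemd/system/docker.service.d/wait-for-immich-ssd.conf
# ensure Docker only starts once the SSD is mounted
[Unit]
RequiresMountsFor=/mnt/immich
```

Mounting by UUID (rather than `/dev/sda1`) means the entry keeps working regardless of which port or order the disk comes up in.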
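The [SCALABILITY] entry is, on the Immich side, a single environment variable. A sketch - the IP and port come from the node table earlier, and the variable name should be double-checked against your Immich version:

```yaml
# compose override on the MacBook - send ML jobs to the GPU node over LAN
services:
  immich-server:
    environment:
      IMMICH_MACHINE_LEARNING_URL: http://192.168.0.197:3003
```

If the GPU node is unreachable, Immich queues the ML jobs rather than failing uploads - which is exactly the behaviour described above.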

What this actually gives me

The infra is a means, not an end. Here's what it actually delivers on a daily basis.

$ cat /etc/homelab/outcomes.log
My photos are actually mine - 15,000+ photos with ML-powered face detection and search; no cloud, no subscription, no lock-in, full privacy.
Passwords I actually trust - Bitwarden-compatible vault synced across all my devices, stored entirely on my hardware.
Ads blocked everywhere - DNS-level filtering for every device on the network (phones, TVs, laptops) with zero per-device config.
Workflows that run themselves - daily automations for notifications, data sync, and task management via n8n; set it and forget it.
$0/month - replaced at least $30/mo in cloud and hosting subscriptions with hardware that was collecting dust.

What I actually learned

Building this was one thing. Operating it day-to-day taught me stuff no documentation ever would.

$ grep -r "LESSON" /var/log/homelab/
# Architecture beats policy for security
Cloudflare Tunnels removed an entire class of risks by design.
No firewall rules to audit, no ports to accidentally leave open.
The best security rule is the one you never have to remember.

# Decouple compute from storage (always)
The GPU laptop can go offline and photos still load fine.
The SSD can move to a new host in minutes. This separation
saved me when I had to swap the MacBook's WiFi driver once.

# You are the SRE team
If a container crashes at 2am, it stays down until I notice.
Health checks, restart policies, and Docker's restart: unless-stopped
aren't nice-to-haves - they're what keeps this thing running.

# Constraints are underrated
No budget meant repurposing hardware I already owned.
WiFi-only meant optimising for reliability over throughput.
Every limitation forced a more thoughtful decision.
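In Compose terms, the SRE lesson boils down to a few lines per service. A hedged sketch - whether the image ships `curl`, and the exact health endpoint, should be verified (`/alive` is Vaultwarden's as I understand it):

```yaml
# docker-compose.yml fragment - restart policy plus a liveness probe
services:
  vaultwarden:
    image: vaultwarden/server
    restart: unless-stopped            # come back after crashes and host reboots
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:80/alive"]
      interval: 60s
      timeout: 10s
      retries: 3
```

`unless-stopped` survives both container crashes and full host reboots, which is the difference between "down until I notice" and "down for thirty seconds".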

What's next

It works. But it's not done - and honestly, that's half the fun.

$ cat /etc/homelab/TODO
# /etc/homelab/TODO
# ─────────────────────────────────────────────────────

[ ] Wired ethernet over WiFi
WiFi has been fine, but it's still a failure point I'd rather
not have. Ethernet would especially help ML offloading latency.

[ ] Proper monitoring (Prometheus + Grafana)
Right now if something dies silently, I find out when I try to
use it. Not great. Can't really call it production-grade without this.

[ ] Automated offsite backups
The SSD is portable, but portable isn't the same as backed up.
Encrypted snapshots to a remote target would close this gap.

[ ] Go deeper on OpenClaw (Clawdius)
Turn Clawdius into a real personal agent, not just a local assistant.
Next step: WhatsApp-based grocery automation (search → list → order).
Also exploring PicoClaw for lightweight, always-on edge inference.

[ ] Failure recovery & self-healing
Right now, restarts are manual or Docker-level.
Adding health checks, auto-restarts, and dependency-aware recovery
would make the system resilient to partial failures.
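For the monitoring TODO, a minimal starting point could be a two-service Compose stack - official images, with illustrative ports and volumes:

```yaml
# docker-compose.yml sketch for the monitoring stack
services:
  prometheus:
    image: prom/prometheus
    restart: unless-stopped
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
  grafana:
    image: grafana/grafana
    restart: unless-stopped
    ports:
      - "3000:3000"
```

Both dashboards would sit behind the same Cloudflare Tunnel as everything else, so no new ports open up.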
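For the offsite-backup TODO, one shape this could take is restic on cron - a sketch only, with a placeholder repo URL and schedule, and the restic password/env handling omitted:

```ini
# /etc/cron.d/homelab-backup - hypothetical schedule
# restic encrypts snapshots client-side before anything leaves the box
0 3 * * *  prince  restic -r s3:s3.example.com/homelab-backup backup /mnt/immich
0 4 * * 0  prince  restic -r s3:s3.example.com/homelab-backup forget --keep-daily 7 --keep-weekly 4 --prune
```

Client-side encryption matters here: the remote target only ever sees ciphertext, which keeps the "my data stays mine" property intact even offsite.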