# shithub.sh — first-deploy setup guide
This is the operator's running-order for taking shithub.sh from "Namecheap registration" to "live and serving signups." Walk it top-to-bottom, one step at a time. Each step has a verify-it-worked check. **Don't skip the verifications** — they're cheap and catch the wrong thing before it compounds.
> **Time budget.** Total ~5–8 hours across 1–2 days, dominated by DNS / Postmark verification waits (you can step away during those).

> **Money budget.** ~$3.50/day from Step 4 onwards (~$105/mo all-in once Spaces buckets are provisioned). If you have to pause for a day, fine; if you need to pause for a week, destroy the droplets — the volume + Spaces + DNS persist.
## Decisions baked in
- Domain: **shithub.sh** (you registered this on Namecheap)
- Region: **NYC3** (DO Spaces parity, codebase default)
- DR region: **SFO3** (cross-region Spaces mirror)
- Email: **Postmark** (free tier covers v0.1.0)
- Mirror plan: **90-day GitHub mirror, then drop**
- Version: **v0.1.0** (pre-1.0; honest about WIP; tag will be cut later)
- On-call: **email-only alerts for week 1**, flip to phone after noise calibration
If any of these change, redo the steps that depend on them.
---

## Phase A — Accounts & DNS (do these first; they propagate while you wait)
### A1. Postmark account

1. Go to <https://account.postmarkapp.com/sign_up>.
2. Sign up with the email you'll use for ops (`ops@shithub.sh` is a good convention; you'll set up the inbox later).
3. After signup, Postmark drops you in a default Server. Rename it: top-left dropdown → **Manage Servers** → click the default → **Settings** → name it **shithub-prod**.
4. Confirm the Server has the **Transactional** message stream (default) — it does for new accounts.
5. **Don't grab the API token yet** — we generate it after verifying the domain.

**Verify:** the Servers list shows one server named `shithub-prod`.
### A2. Verify the sender domain in Postmark

This is the slow step. Start it now; come back later.

1. In the Postmark dashboard: top-right **Sender Signatures** → **Domains** tab → **Add Domain**.
2. Enter **`shithub.sh`** (the bare apex — DKIM applies to all subdomains).
3. Postmark presents three DNS records to add:
   - **DKIM** — a `TXT` record at `<postmark-prefix>._domainkey`
   - **Return-Path** — a `CNAME` at `pm-bounces` (for VERP-style bounce handling)
   - **(optional) DMARC** — a `TXT` at `_dmarc` — Postmark will suggest one; we'll add it.
4. Leave that tab open; we add the records in Namecheap next.
### A3. Add Postmark DNS records in Namecheap

1. <https://ap.www.namecheap.com/Domains/DomainControlPanel/shithub.sh/advancedns>
2. **Advanced DNS** tab. Set **NameServers** to **Namecheap BasicDNS** if it isn't already (the default after a fresh registration).
3. **Add a New Record** for each of the three Postmark records. Match the exact `Host` and `Value` Postmark gave you. TTL = 1 min during setup (we'll relax it later).
4. Also add an **SPF record** if there isn't one already:
   - Type: `TXT`
   - Host: `@`
   - Value: `v=spf1 include:spf.mtasv.net ~all`
   - (`spf.mtasv.net` is Postmark's relay; the `~all` is soft-fail.)
5. **DMARC record** (recommended; Postmark prompts you):
   - Type: `TXT`
   - Host: `_dmarc`
   - Value: `v=DMARC1; p=none; rua=mailto:dmarc-rua@shithub.sh; pct=100`
   - `p=none` means "report only, don't reject" — appropriate for week 1 while we tune. Tighten to `p=quarantine` later.

**Verify:** in Postmark's **Domains** tab, click **Verify**. DKIM may take 5–30 min to propagate. Other records refresh on their own. The verification turns green once all records are seen. Move on; come back to confirm.
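While you wait on Postmark's green check, you can watch propagation yourself from any machine with `dig`. A small sketch: the `looks_like_spf` helper is our invention (pure string matching, no network); the commented queries are what you'd actually run once the records are saved.

```shell
# Hypothetical local helper: does a TXT value look like the SPF record
# this guide adds? Resolvers usually return TXT values quoted, so the
# leading quote is optional in the pattern.
looks_like_spf() {
  printf '%s\n' "$1" | grep -Eq '^"?v=spf1 .*include:spf\.mtasv\.net'
}

# The actual spot-checks (network required):
#   dig +short TXT shithub.sh
#   dig +short TXT _dmarc.shithub.sh
#   dig +short CNAME pm-bounces.shithub.sh

looks_like_spf '"v=spf1 include:spf.mtasv.net ~all"' && echo "SPF looks right"
```

DKIM is the straggler; if the apex TXT and `pm-bounces` CNAME resolve but Postmark still shows pending, it's almost always the `_domainkey` record.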
### A4. Grab the Postmark API token

You can do this immediately after creating the Server — domain verification is independent of token issuance.

1. Postmark → **Servers** → **shithub-prod** → **API Tokens** tab.
2. The default Server token shown there is what we'll use.
3. **Copy it.** Keep it in your password manager.
**About the From address.** Postmark has no "Sender From" field on the Server — the From string lives in the body of each API call. Once your domain is DKIM-verified (Phase A2), every address `*@shithub.sh` is authorized to send; no per-address "Sender Signature" needed. We pass the literal From string to Postmark via the inventory in Phase D3:

```
email_from: shithub <noreply@shithub.sh>
```

The `noreply@` mailbox doesn't need to actually exist as an inbox — replies to it bounce, which is the documented behavior (`docs/public/user/notifications.md` notes that reply-by-email isn't supported).
### A5. Set up DNS for the app + docs subdomains

Still in Namecheap **Advanced DNS** for shithub.sh. Add:

- `A` record, Host `@`, Value `<APP-DROPLET-IP>` — **placeholder for now**; we'll fill the real IP after creating the droplet in Phase B. **Pin TTL to 1 min** for now.
- `A` record, Host `www`, Value `<APP-DROPLET-IP>` — same.
- `CNAME` record, Host `docs`, Value `shithub-docs.nyc3.cdn.digitaloceanspaces.com.` (Spaces CDN — we'll create the bucket in Phase B; the CNAME resolves once the bucket exists.) The trailing dot matters.

Skip the records you don't have IPs for yet; they go in after Phase B step B3.
### A6. Telegram bot for alerts (skipped — week-1 email only)
We're using email-only alerts for week 1. Email goes via
Postmark to your ops mailbox. When you flip to phone alerts:
follow docs/internal/runbooks/incidents.md and add a Telegram
bot — won't repeat that here.
---

## Phase B — DigitalOcean infrastructure
### B1. DO project + SSH key
These are two unrelated resources — the project is a workspace grouping, the SSH key is an account-wide credential. You'll attach the key to each droplet at create time (Phase B3).
**Project:**

1. <https://cloud.digitalocean.com> → **Projects** (left sidebar) → **New Project**.
2. Name: **shithub-prod**. Purpose: **Service or API**. Environment: **Production**.

**SSH key — easiest path: add it during droplet creation in B3.**

DO's standalone "SSH Keys" settings page keeps moving around the UI, so the most reliable place to add a new key is the droplet create form itself:

- In Phase B3 step 6 ("Authentication"), click **"+ New SSH Key"** on the form. Paste `~/.ssh/id_ed25519.pub` from your laptop, name it after the laptop ("macbook-pro"). The key gets saved to your account permanently AND attached to this droplet.
- For droplets #2–4, the key is already in the list — just tick its checkbox.

If you want to add the key in advance via the standalone page: use the dashboard's top search bar — type **"SSH"** — and it'll surface the current location. (Path varies; "Settings → Security → SSH Keys" used to work, but DO reorganises this page often.)

**Verify:** the project shows under Projects. The SSH key verification happens at droplet-create time when you tick the box.
### B2. Create Spaces buckets (do these BEFORE droplets — the docs CNAME depends on the docs bucket existing)

> **About this section.** DO's web UI for Spaces changes regularly (region availability, form layout, post-create settings paths). This section describes **what each bucket needs to be**, not where to click. Find the create form via the dashboard's left sidebar (**Spaces Object Storage** at the time of writing) or the top search bar — type "Spaces". For a UI-free path, see Phase B0 (`provision-do.sh`) at the bottom of this guide.

You need three buckets. **Storage type for all three: Standard.** (Cold Storage has a 30-day minimum retention that surprise-bills when our daily backups churn.) **First bucket triggers the $5/mo Spaces subscription**, which covers all three up to 250 GiB total + 1000 GiB bandwidth.
| # | Bucket name | Region | CDN | Notes |
|---|---|---|---|---|
| 1 | `shithub-backups` | **Region A** — pick whichever DO offers (e.g. SFO3) | off | Primary backups (WAL + daily pg_dump). |
| 2 | `shithub-backups-dr` | **Region B** — DIFFERENT region from A | off | Cross-region DR mirror; pick anything other than A. |
| 3 | `shithub-docs` | **Same as A** | **on** | Docs site frontend; CDN serves `docs.shithub.sh`. |
After all three exist:
1. **Assign the docs custom domain.** Go to the `shithub-docs` bucket → its CDN settings (path varies; the create form notes "you can assign a custom domain in CDN settings after the Space is created"). Set custom domain to `docs.shithub.sh`. DO will tell you the CNAME target it expects on your DNS; match the value in Phase A5 to that.
2. **Generate Spaces access keys.** Find the Spaces Keys management page (left sidebar **API** section, or top search bar → "Spaces Keys"). Generate a new key named `shithub-prod-app`. **Copy the secret immediately** — only shown once.
**Verify:** three buckets listed under Spaces. Endpoint URL follows `<bucket>.<region>.digitaloceanspaces.com`. The inventory `s3_endpoint` field gets `<region>.digitaloceanspaces.com` (no bucket name in front).
**Project assignment:** if you haven't created a `shithub-prod` project yet, put the buckets in any existing project for now — they're trivially moved later via the dashboard. Project membership is workspace-grouping, not access control.
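The two URL shapes in the verify step are easy to mix up, so here is a tiny sketch spelling out which one goes where (the helper names are ours, not real tools; the region is illustrative):

```shell
# <region>.digitaloceanspaces.com — the shape the inventory's
# s3_endpoint field wants (no bucket name in front).
spaces_endpoint() { printf '%s.digitaloceanspaces.com\n' "$1"; }

# https://<bucket>.<region>.digitaloceanspaces.com — the per-bucket URL
# the dashboard shows. NOT what s3_endpoint wants.
bucket_url() { printf 'https://%s.%s.digitaloceanspaces.com\n' "$1" "$2"; }

spaces_endpoint nyc3              # → nyc3.digitaloceanspaces.com
bucket_url shithub-backups nyc3   # → https://shithub-backups.nyc3.digitaloceanspaces.com
```

If a tool complains about bucket-not-found against a URL that already contains the bucket name, this distinction is usually the culprit.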
### B3. Create the four droplets

> **UI-stable description.** The DO droplet-create form changes field layouts every few quarters. This section describes the **shape each droplet needs to take**; locate the create form via the dashboard's **Droplets** sidebar entry or top search bar.
Required for each droplet:
- **Image:** Ubuntu 24.04 LTS x64 (Distributions tab; not Marketplace).
- **Region:** **same region as the primary Spaces bucket** (Region A from B2). All four droplets in the same region keeps intra-VPC traffic free.
- **VPC Network:** the default VPC in that region. **All four droplets MUST be in the same VPC** — that's how they reach each other over private IPs.
- **Authentication:** SSH Key. On the first droplet, click "+ New SSH Key" and paste your laptop's `~/.ssh/id_ed25519.pub` (this saves it to your account). On droplets #2–4, just tick the same key.
- **DO Backups:** **off** (our own backup pipeline runs).
- **DO Monitoring:** **on** (free agent, useful baseline metrics).
- **Tags:** `shithub` plus a per-role tag (e.g., `shithub-app`).
- **Project:** the project you're putting everything in.
Per-droplet variations:

| # | Hostname | Size (DO slug) | Cost/mo | Per-role tag |
|---|---|---|---|---|
| 1 | `shithub-app` | s-2vcpu-4gb | $24 | `shithub-app` |
| 2 | `shithub-db` | s-2vcpu-4gb | $24 | `shithub-db` |
| 3 | `shithub-backup` | s-1vcpu-2gb | $12 | `shithub-backup` |
| 4 | `shithub-monitoring` | s-2vcpu-4gb | $24 | `shithub-monitoring` |
Size selection: in the create form, look for Basic plan → Regular SSD → the size matrix. The slug names above are DO's API identifiers and appear under each tile in the form.
**Capture the IPs.** For each droplet, write down both the public IPv4 (for SSH from your laptop, and for shithub-app to point DNS at) and the private IPv4 (for inter-droplet traffic — appears under "Private IPv4" in the droplet detail).

Now go back to Phase A5 and update the `A` records for `@` and `www` to point at shithub-app's public IPv4. The CNAME for docs you set already; it will start resolving once the shithub-docs CDN is ready (~1 min).
### B4. Create + attach the block volume

- Left sidebar → **Volumes** → **Create Volume**.
- Region: **NYC3**.
- Size: **100 GB** ($10/mo).
- Filesystem format: **Ext4**. Mount options: automatic.
- Attach to droplet: `shithub-app`.
- Mount point: `/mnt/shithub_data` (DO's default for the volume name).
- Create + attach.
**Verify** (after SSHing in B5):

```
df -h /mnt/shithub_data   # should show ~100 GB Ext4 mounted
```

We'll move the mount to `/data` (where the playbook expects it) during the Phase C bind-mount step.
### B5. SSH-bootstrap — confirm you can reach all four droplets
From your laptop:

```
# Replace IPs with the public IPv4 of each droplet.
ssh root@<APP-IP>          # shithub-app
ssh root@<DB-IP>           # shithub-db
ssh root@<BACKUP-IP>       # shithub-backup
ssh root@<MONITORING-IP>   # shithub-monitoring
```

If `Permission denied (publickey)`, your SSH key didn't get attached at create time; add it via the DO console (Droplet → **Access** → **Reset Root Password** is the only fallback that works without an existing key).

**Verify:** `whoami` returns `root` on each.
### B6. Bootstrap inter-droplet SSH
Ansible will use shithub-app as its control node (we install it there in Phase C). For Ansible to reach the other three over the private network, shithub-app needs an SSH key authorized on each.
On shithub-app:

```
ssh-keygen -t ed25519 -f /root/.ssh/id_ed25519 -N ""
cat /root/.ssh/id_ed25519.pub
```

Copy that public key. On shithub-db, shithub-backup, and shithub-monitoring:

```
mkdir -p /root/.ssh
cat >> /root/.ssh/authorized_keys <<'EOF'
<paste shithub-app's pubkey here>
EOF
chmod 600 /root/.ssh/authorized_keys
```

**Verify** (from shithub-app):

```
ssh root@<DB-PRIVATE-IP> hostname           # → shithub-db
ssh root@<BACKUP-PRIVATE-IP> hostname       # → shithub-backup
ssh root@<MONITORING-PRIVATE-IP> hostname   # → shithub-monitoring
```
If asked about host key fingerprints, say yes.
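If you want to rehearse the key dance without touching `/root/.ssh`, the same `ssh-keygen` flags work against throwaway paths — a self-contained sketch (the scratch directory is our invention; delete it when done):

```shell
# Generate a throwaway ed25519 keypair and inspect it — same flags as the
# real step, scratch paths instead of /root/.ssh.
tmp=$(mktemp -d)
ssh-keygen -t ed25519 -f "$tmp/id_ed25519" -N "" -q

# The public key is a single "ssh-ed25519 AAAA... comment" line; that
# whole line is what gets appended to authorized_keys on the other hosts.
grep -c '^ssh-ed25519 ' "$tmp/id_ed25519.pub"   # → 1
ssh-keygen -lf "$tmp/id_ed25519.pub"            # fingerprint, for eyeballing

rm -rf "$tmp"
```

A truncated or line-wrapped paste is the most common way the B6 step goes wrong; the single-line invariant above is the thing to check.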
---

## Phase C — Hand Claude Code the keyboard
### C1. Install Claude Code on shithub-app

On shithub-app:

```
apt-get update
apt-get install -y curl ca-certificates
curl -fsSL https://claude.ai/install.sh | sh
```

Then run `claude` and authenticate with the same Anthropic account you're using on your laptop (browser flow on your laptop, paste the code into the SSH terminal).
### C2. Install build dependencies on shithub-app

These are needed to build the `shithubd` binary on the droplet:

```
apt-get install -y \
  git make build-essential \
  ansible \
  golang-go

# Verify Go ≥ 1.22 (the project targets 1.26).
go version
```

Ubuntu 24.04's apt Go (~1.22) is fine; if that's what you got, skip the next part. If you need a newer Go, install via the tarball method:

```
curl -LO https://go.dev/dl/go1.22.5.linux-amd64.tar.gz
rm -rf /usr/local/go
tar -C /usr/local -xzf go1.22.5.linux-amd64.tar.gz
echo 'export PATH=$PATH:/usr/local/go/bin' >> /root/.bashrc
source /root/.bashrc
go version
```
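If you'd rather script the "is apt's Go new enough" check, a sketch (the `go_minor` helper is ours; it assumes version tokens shaped like `go1.22.5`, which is what `go version` prints):

```shell
# Extract the minor number from a goX.Y.Z token (pure string work).
go_minor() { printf '%s\n' "$1" | sed -E 's/^go1\.([0-9]+).*/\1/'; }

# go version prints e.g. "go version go1.22.5 linux/amd64"; field 3 is
# the token. Degrades quietly if go isn't installed yet.
v=$(go version 2>/dev/null | awk '{print $3}')
if [ -n "$v" ] && [ "$(go_minor "$v")" -ge 22 ]; then
  echo "apt Go is new enough ($v) — skip the tarball install"
fi
```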
### C3. Clone the source

```
mkdir -p /root/src && cd /root/src
git clone https://github.com/tenseleyFlow/shithub.git
cd shithub
git log --oneline -3   # confirm latest commit
```
### C4. Move the volume mount to /data

The Ansible playbook expects `/data` as the data root. We could edit the inventory, but it's cleaner to bind-mount the DO-attached volume to `/data`:

```
mkdir -p /data
mount --bind /mnt/shithub_data /data
echo '/mnt/shithub_data /data none bind 0 0' >> /etc/fstab

# Verify the bind mount is in place (re-check after the next reboot):
mount | grep -E '/(data|mnt/shithub_data)'
```
### C5. Hand off to Claude Code

From your SSH session on shithub-app, run:

```
cd /root/src/shithub
claude
```

When Claude is up, paste this priming message:

> This is the shithub deploy you've been planning. The repo is at github.com/tenseleyFlow/shithub. You wrote the sprint specs at .docs/sprints/, especially S37-deploy.md and S40-launch.md. We are at Phase D of `deploy/cutover/SETUP-GUIDE.md` — please read that file and the prerequisite docs, then walk me through Phase D with my hands on the keyboard. Domain is shithub.sh. Postmark is set up. Spaces buckets are: shithub-prod (NYC3, private), shithub-prod-dr (SFO3, private), shithub-docs (NYC3, public). Volume bind-mounted at /data. Other droplets reachable via private IPs.

Claude will pick up from there. The rest of this guide is written for Claude (or you, if you'd rather drive yourself).
---

## Phase D — Inventory + secrets
### D1. Copy the production inventory template

```
cd /root/src/shithub
cp deploy/ansible/inventory/production.example deploy/ansible/inventory/production
```

The bare name `production` is gitignored so secrets stay out of the repo.
### D2. Generate the cryptographic secrets

```
# Session signing key (cookie MAC).
openssl rand -base64 32 > /tmp/session_key

# TOTP AEAD key (encrypts 2FA secrets at rest).
openssl rand -base64 32 > /tmp/totp_key

# Postgres passwords.
openssl rand -base64 24 > /tmp/db_password
openssl rand -base64 24 > /tmp/hook_password

# WireGuard private keys (one per host).
for h in app db backup monitoring; do
  wg genkey > /tmp/wg_${h}.key
  wg pubkey < /tmp/wg_${h}.key > /tmp/wg_${h}.pub
done
```
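A paranoid sanity check on what those `openssl` commands emit — pure base64 arithmetic (32 raw bytes encode to 44 characters, 24 bytes to 32, each with a trailing newline):

```shell
# 32-byte keys: ceil(32/3)*4 = 44 base64 chars, +1 newline = 45 bytes.
openssl rand -base64 32 | wc -c   # expect 45
# 24-byte passwords: 24/3*4 = 32 base64 chars, +1 newline = 33 bytes.
openssl rand -base64 24 | wc -c   # expect 33
```

A shorter count usually means a typo'd byte count; catching it here beats debugging a rejected session cookie later.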
### D3. Fill in the inventory

```
$EDITOR deploy/ansible/inventory/production
```

Fill in:

- `app_host`, `db_host`, `backup_host`, `monitoring_host` — the private IPv4 of each droplet.
- `domain: shithub.sh`
- `caddy_email: ops@shithub.sh` (Let's Encrypt notifications)
- `db_password`, `hook_password` — paste from `/tmp/`.
- `session_key`, `totp_key` — paste from `/tmp/`.
- `s3_endpoint: nyc3.digitaloceanspaces.com` (DigitalOcean Spaces via the S3-compatible API)
- `s3_region: us-east-1` (Spaces uses this for SigV4)
- `s3_bucket: shithub-prod`
- `s3_access_key_id`, `s3_secret_access_key` — from B2 ("Generate Spaces access keys").
- `email_backend: postmark`
- `postmark_server_token` — from A4.
- `email_from: shithub <noreply@shithub.sh>`
- `auth_base_url: https://shithub.sh`
- WireGuard peer keys from `/tmp/wg_*.{key,pub}`.
After filling in:

```
chmod 600 deploy/ansible/inventory/production
shred -u /tmp/session_key /tmp/totp_key /tmp/db_password \
  /tmp/hook_password /tmp/wg_*.key
```

(Keep the public WireGuard keys in case you need to add a peer later; the private keys are now only in the inventory.)
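Before deploying, a quick scan for anything left unfilled. This assumes the template marks blanks with `<ANGLE-BRACKET>` tokens, as this guide's examples do; `check_inventory` is our throwaway name:

```shell
# Succeeds iff the file contains no <PLACEHOLDER>-style tokens.
# (Uppercase-first pattern so legit values like <noreply@shithub.sh>
# don't trip it.)
check_inventory() { ! grep -Eq '<[A-Z][A-Z0-9_-]*>' "$1"; }

# Against the real file:
#   check_inventory deploy/ansible/inventory/production && echo "no blanks left"

# Self-contained demo:
f=$(mktemp)
printf 'db_password: <DB-PASSWORD>\n' > "$f"
check_inventory "$f" || echo "still has placeholders"
printf 'db_password: s3cr3t\n' > "$f"
check_inventory "$f" && echo "clean"
rm -f "$f"
```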
---

## Phase E — Deploy
### E1. Dry-run

```
cd /root/src/shithub
make deploy-check ANSIBLE_INVENTORY=production
```

Read the diff. Expect every host to be `changed`. If any host shows `unreachable`, the SSH bootstrap from B6 missed a droplet.
### E2. Build the binary with the version stamp

We're not cutting the v0.1.0 tag yet — that's a launch-day ceremony. For this first deploy, the binary will stamp the short commit + build time, which is fine; the soft-launch window catches surprises before we tag.

```
make build
./bin/shithubd version
# expect:
#   Version: <short-commit-or-dev>
#   Commit:  <short-commit>
#   Built:   <today, UTC>
```
### E3. Apply

```
make deploy ANSIBLE_INVENTORY=production
```

Expect ~15–30 min on the first run. The roles run in this order: base → postgres → shithubd → caddy → wireguard → backup → monitoring-client. Caddy will obtain real Let's Encrypt certs on first request; `caddy_use_acme_staging` should be `false` in the inventory so we get a real cert.

If a role fails, stop. Re-running with `--limit` and the specific role tag is the surgical path. Read the journal of the failing service before retrying:

```
journalctl -u <service> -n 200
```
### E4. Bootstrap the admin (you)

```
ssh root@<APP-PRIVATE-IP>
sudo -u shithub /usr/local/bin/shithubd admin bootstrap-admin \
  --email you@your-current-email.com
```

The CLI prints a one-time password-reset link. Open it in a browser, set a password, and immediately enable 2FA.
### E5. Smoke

```
deploy/cutover/smoke.sh https://shithub.sh
```

If everything is green, the soft launch is live. Signups are still gated (we set `SHITHUB_AUTH__SIGNUP_DISABLED=true` in the inventory by default for soft launch); you're the only user.
---

## Phase F — Soft launch (24–48h)
You're now using shithub yourself. Things to do:
- **Push the project's source.** Create the `shithub` org, create the `shithub` repo under it, push from this checkout.
- **Walk every flow.** Signup (toggle the gate, verify, gate again), password reset, 2FA, SSH key add (the SSH transport isn't shipped, so this just stores the key — fine), PAT create, repo create, push, issue, PR, review, merge, search.
- **Note every rough edge.** File issues against `shithub/shithub` itself.
- **Watch the dashboards.** Grafana on the monitoring host.
- **First daily backup** runs at the next 03:00 UTC. Confirm it landed in Spaces the morning after.
- **First restore drill.** When the second daily backup exists, run the drill.
Day 2 morning: you have a real instance with real data + a real backup. You're as ready as you'll ever be.
---

## Phase G — Public launch
### G1. Tag v0.1.0

```
cd /root/src/shithub
git tag -a v0.1.0 -m "v0.1.0 — initial public release"
git push origin v0.1.0
```
### G2. Build and re-deploy with the tagged version

```
git fetch --tags
git checkout v0.1.0
make build
./bin/shithubd version   # Version: v0.1.0
make deploy ANSIBLE_INVENTORY=production
```

Now the home page's Version field reads v0.1.0. Verify in a browser.
### G3. Open signups

Edit the inventory:

```
$EDITOR deploy/ansible/inventory/production
# Change: signup_disabled: false
make deploy ANSIBLE_INVENTORY=production ANSIBLE_TAGS=app
```
### G4. Update the status page

```
$EDITOR docs/public/status.md
# Update timestamp + "All systems normal."
make docs   # local mdBook build
deploy/docs-site/sync-to-spaces.sh
```
### G5. Post the announcement

`docs/blog/v0.1.0-launch.md` is the copy. Submit to:

- Hacker News
- /r/programming, /r/selfhosted
- lobste.rs
- Mastodon

You're live. Read `docs/internal/runbooks/day-one.md` next.
---

## When something goes wrong
- **Caddy won't get a cert.** DNS hasn't fully propagated, or port 80 is blocked. Check both. Last resort: flip `caddy_use_acme_staging=true`, redeploy, get the staging cert to confirm everything else works, then flip back when DNS is good.
- **`/readyz` returns 503.** DB or storage unreachable. Check `journalctl -u shithubd-web` for the specific error.
- **Postmark says emails go through but they don't arrive.** Check spam, then check Postmark's Activity tab — every send is logged. DKIM may not be propagated yet; the first 24h is rough on cold-start deliverability.
- **Ansible reports "host unreachable" mid-run.** SSH from shithub-app to the failing host's private IP; if that doesn't work, you're missing the key from B6.
Anything else: read `docs/internal/runbooks/incidents.md` and the `troubleshooting.md` in the self-host docs.
View source
| 1 | # shithub.sh — first-deploy setup guide |
| 2 | |
| 3 | This is the operator's running-order for taking shithub.sh from |
| 4 | "Namecheap registration" to "live and serving signups." Walk it |
| 5 | top-to-bottom, one step at a time. Each step has a verify-it- |
| 6 | worked check. **Don't skip the verifications** — they're cheap |
| 7 | and catch the wrong thing before it compounds. |
| 8 | |
| 9 | > **Time budget.** Total ~5–8 hours across 1–2 days, dominated |
| 10 | > by DNS / Postmark verification waits (you can step away during |
| 11 | > those). |
| 12 | |
| 13 | > **Money budget.** ~$3.50/day from Step 4 onwards |
| 14 | > (~$105/mo all-in once Spaces buckets are provisioned). If you |
| 15 | > have to pause for a day, fine; if you need to pause for a week, |
| 16 | > destroy the droplets — the volume + Spaces + DNS persist. |
| 17 | |
| 18 | ## Decisions baked in |
| 19 | |
| 20 | - Domain: **shithub.sh** (you registered this on Namecheap) |
| 21 | - Region: **NYC3** (DO Spaces parity, codebase default) |
| 22 | - DR region: **SFO3** (cross-region Spaces mirror) |
| 23 | - Email: **Postmark** (free tier covers v0.1.0) |
| 24 | - Mirror plan: **90-day GitHub mirror, then drop** |
| 25 | - Version: **v0.1.0** (pre-1.0; honest about WIP; tag will be cut later) |
| 26 | - On-call: **email-only alerts for week 1**, flip to phone after |
| 27 | noise calibration |
| 28 | |
| 29 | If any of these change, redo the steps that depend on them. |
| 30 | |
| 31 | --- |
| 32 | |
| 33 | ## Phase A — Accounts & DNS (do these first; they propagate while you wait) |
| 34 | |
| 35 | ### A1. Postmark account |
| 36 | |
| 37 | 1. Go to <https://account.postmarkapp.com/sign_up>. |
| 38 | 2. Sign up with the email you'll use for ops |
| 39 | (`ops@shithub.sh` is a good convention; you'll set up the |
| 40 | inbox later). |
| 41 | 3. After signup, Postmark drops you in a default Server. |
| 42 | Rename it: top-left dropdown → **Manage Servers** → click the |
| 43 | default → **Settings** → name it **shithub-prod**. |
| 44 | 4. Confirm the Server has the **Transactional** message stream |
| 45 | (default) — it does for new accounts. |
| 46 | 5. **Don't grab the API token yet** — we generate it after |
| 47 | verifying the domain. |
| 48 | |
| 49 | **Verify:** the Servers list shows one server named |
| 50 | `shithub-prod`. |
| 51 | |
| 52 | ### A2. Verify the sender domain in Postmark |
| 53 | |
| 54 | This is the slow step. Start it now; come back later. |
| 55 | |
| 56 | 1. In the Postmark dashboard: top-right **Sender Signatures** → |
| 57 | **Domains** tab → **Add Domain**. |
| 58 | 2. Enter **`shithub.sh`** (the bare apex — DKIM applies to all |
| 59 | subdomains). |
| 60 | 3. Postmark presents three DNS records to add: |
| 61 | - **DKIM** — a `TXT` record at `<postmark-prefix>._domainkey` |
| 62 | - **Return-Path** — a `CNAME` at `pm-bounces` (for VERP-style |
| 63 | bounce handling) |
| 64 | - **(optional) DMARC** — a `TXT` at `_dmarc` — Postmark will |
| 65 | suggest one; we'll add it. |
| 66 | 4. Leave that tab open; we add the records in Namecheap next. |
| 67 | |
| 68 | ### A3. Add Postmark DNS records in Namecheap |
| 69 | |
| 70 | 1. <https://ap.www.namecheap.com/Domains/DomainControlPanel/shithub.sh/advancedns> |
| 71 | 2. **Advanced DNS** tab. Set **NameServers** to **Namecheap |
| 72 | BasicDNS** if it isn't already (the default after a fresh |
| 73 | registration). |
| 74 | 3. **Add a New Record** for each of the three Postmark records. |
| 75 | Match the exact `Host` and `Value` Postmark gave you. TTL = |
| 76 | 1 min during setup (we'll relax it later). |
| 77 | 4. Also add an **SPF record** if there isn't one already: |
| 78 | - Type: `TXT` |
| 79 | - Host: `@` |
| 80 | - Value: `v=spf1 include:spf.mtasv.net ~all` |
| 81 | - (`spf.mtasv.net` is Postmark's relay; the `~all` is |
| 82 | soft-fail.) |
| 83 | 5. **DMARC record** (recommended; Postmark prompts you): |
| 84 | - Type: `TXT` |
| 85 | - Host: `_dmarc` |
| 86 | - Value: `v=DMARC1; p=none; rua=mailto:dmarc-rua@shithub.sh; pct=100` |
| 87 | - `p=none` means "report only, don't reject" — appropriate |
| 88 | for week 1 while we tune. Tighten to `p=quarantine` later. |
| 89 | |
| 90 | **Verify:** in Postmark's **Domains** tab, click **Verify**. |
| 91 | DKIM may take 5–30 min to propagate. Other records refresh on |
| 92 | their own. The verification turns green once all records are |
| 93 | seen. Move on; come back to confirm. |
| 94 | |
| 95 | ### A4. Grab the Postmark API token |
| 96 | |
| 97 | You can do this immediately after creating the Server — domain |
| 98 | verification is independent of token issuance. |
| 99 | |
| 100 | 1. Postmark → **Servers** → **shithub-prod** → **API Tokens** |
| 101 | tab. |
| 102 | 2. The default Server token shown there is what we'll use. |
| 103 | 3. **Copy it.** Keep in your password manager. |
| 104 | |
| 105 | **About the From address.** Postmark has no "Sender From" field |
| 106 | on the Server — the From string lives in the body of each API |
| 107 | call. Once your domain is DKIM-verified (Phase A2), every |
| 108 | address `*@shithub.sh` is authorized to send; no per-address |
| 109 | "Sender Signature" needed. We pass the literal From string to |
| 110 | Postmark via the inventory in Phase D3: |
| 111 | |
| 112 | ``` |
| 113 | email_from: shithub <noreply@shithub.sh> |
| 114 | ``` |
| 115 | |
| 116 | The `noreply@` mailbox doesn't need to actually exist as an |
| 117 | inbox — replies to it bounce, which is the documented behavior |
| 118 | (`docs/public/user/notifications.md` notes that reply-by-email |
| 119 | isn't supported). |
| 120 | |
| 121 | ### A5. Set up DNS for the app + docs subdomains |
| 122 | |
| 123 | Still in Namecheap **Advanced DNS** for shithub.sh. Add: |
| 124 | |
| 125 | - `A` record, Host `@`, Value `<APP-DROPLET-IP>` — **placeholder |
| 126 | for now**; we'll fill the real IP after creating the droplet |
| 127 | in Phase B. **Pin TTL to 1 min** for now. |
| 128 | - `A` record, Host `www`, Value `<APP-DROPLET-IP>` — same. |
| 129 | - `CNAME` record, Host `docs`, Value `shithub-docs.nyc3.cdn.digitaloceanspaces.com.` |
| 130 | (Spaces CDN — we'll create the bucket in Phase B; the CNAME |
| 131 | resolves once the bucket exists.) The trailing dot matters. |
| 132 | |
| 133 | Skip the records you don't have IPs for yet; they go in after |
| 134 | Phase B step B3. |
| 135 | |
| 136 | ### A6. Telegram bot for alerts (skipped — week-1 email only) |
| 137 | |
| 138 | We're using email-only alerts for week 1. Email goes via |
| 139 | Postmark to your ops mailbox. When you flip to phone alerts: |
| 140 | follow `docs/internal/runbooks/incidents.md` and add a Telegram |
| 141 | bot — won't repeat that here. |
| 142 | |
| 143 | --- |
| 144 | |
| 145 | ## Phase B — DigitalOcean infrastructure |
| 146 | |
| 147 | ### B1. DO project + SSH key |
| 148 | |
| 149 | These are two unrelated resources — the project is a workspace |
| 150 | grouping, the SSH key is an account-wide credential. You'll |
| 151 | attach the key to each droplet at create time (Phase B3). |
| 152 | |
| 153 | **Project:** |
| 154 | |
| 155 | 1. <https://cloud.digitalocean.com> → **Projects** (left sidebar) |
| 156 | → **New Project**. |
| 157 | 2. Name: **shithub-prod**. Purpose: **Service or API**. |
| 158 | Environment: **Production**. |
| 159 | |
| 160 | **SSH key — easiest path: add it during droplet creation in B3.** |
| 161 | |
| 162 | DO's standalone "SSH Keys" settings page keeps moving around the |
| 163 | UI, so the most reliable place to add a new key is the droplet |
| 164 | create form itself: |
| 165 | |
| 166 | - In Phase B3 step 6 ("Authentication"), click **"+ New SSH Key"** |
| 167 | on the form. Paste `~/.ssh/id_ed25519.pub` from your laptop, |
| 168 | name it after the laptop ("macbook-pro"). The key gets saved |
| 169 | to your account permanently AND attached to this droplet. |
| 170 | - For droplets #2–4, the key is already in the list — just tick |
| 171 | its checkbox. |
| 172 | |
| 173 | If you want to add the key in advance via the standalone page: |
| 174 | use the dashboard's top search bar — type **"SSH"** — and it'll |
| 175 | surface the current location. (Path varies; "Settings → Security |
| 176 | → SSH Keys" used to work, but DO reorganises this page often.) |
| 177 | |
| 178 | **Verify:** the project shows under Projects. The SSH key |
| 179 | verification happens at droplet-create time when you tick the |
| 180 | box. |
| 181 | |
| 182 | ### B2. Create Spaces buckets (do these BEFORE droplets — the docs CNAME depends on the docs bucket existing) |
| 183 | |
| 184 | > **About this section.** DO's web UI for Spaces changes |
| 185 | > regularly (region availability, form layout, post-create |
| 186 | > settings paths). This section describes **what each bucket |
| 187 | > needs to be**, not where to click. Find the create form via |
| 188 | > the dashboard's left sidebar (**Spaces Object Storage** at the |
| 189 | > time of writing) or the top search bar — type "Spaces". For |
| 190 | > a UI-free path, see Phase B0 (`provision-do.sh`) at the |
| 191 | > bottom of this guide. |
| 192 | |
| 193 | You need three buckets. **Storage type for all three: Standard.** |
| 194 | (Cold Storage has a 30-day minimum retention that surprise-bills |
| 195 | when our daily backups churn.) **First bucket triggers the $5/mo |
| 196 | Spaces subscription** which covers all three up to 250 GiB total |
| 197 | + 1000 GiB bandwidth. |
| 198 | |
| 199 | | # | Bucket name | Region | CDN | Notes | |
| 200 | |---|-----------------------|---------------------------------------------|---------|--------------------------------------------------------| |
| 201 | | 1 | `shithub-backups` | **Region A** — pick whichever DO offers (e.g. SFO3) | off | Primary backups (WAL + daily pg_dump). | |
| 202 | | 2 | `shithub-backups-dr` | **Region B** — DIFFERENT region from A | off | Cross-region DR mirror; pick anything other than A. | |
| 203 | | 3 | `shithub-docs` | **Same as A** | **on** | Docs site frontend; CDN serves `docs.shithub.sh`. | |

After all three exist:

1. **Assign the docs custom domain.** Go to the `shithub-docs`
   bucket → its CDN settings (path varies; the create form notes
   "you can assign a custom domain in CDN settings after the
   Space is created"). Set custom domain to `docs.shithub.sh`.
   DO will tell you the CNAME target it expects on your DNS;
   match the value in Phase A5 to that.
2. **Generate Spaces access keys.** Find the Spaces Keys
   management page (left sidebar **API** section, or top search
   bar → "Spaces Keys"). Generate a new key named
   `shithub-prod-app`. **Copy the secret immediately** — only
   shown once.

**Verify:** three buckets listed under Spaces. Endpoint URL
follows `<bucket>.<region>.digitaloceanspaces.com`. The
inventory `s3_endpoint` field gets `<region>.digitaloceanspaces.com`
(no bucket name in front).

**Project assignment:** if you haven't created a `shithub-prod`
project yet, put the buckets in any existing project for now —
they're trivially moved later via the dashboard. Project
membership is workspace-grouping, not access control.
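
The Verify note's endpoint rule can be checked mechanically. A
minimal sketch, assuming the AWS CLI is installed and the key from
step 2 is exported in the standard `AWS_ACCESS_KEY_ID` /
`AWS_SECRET_ACCESS_KEY` variables (Spaces speaks the S3 API):

```sh
# Derive the inventory's s3_endpoint from a bucket endpoint by
# stripping the leading bucket name.
bucket_endpoint="shithub-prod.nyc3.digitaloceanspaces.com"
s3_endpoint="${bucket_endpoint#*.}"    # nyc3.digitaloceanspaces.com
echo "s3_endpoint: $s3_endpoint"

# Optional live check: should list all three buckets.
aws s3 ls --endpoint-url "https://$s3_endpoint"
```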

### B3. Create the four droplets

> **UI-stable description.** The DO droplet-create form changes
> field layouts every few quarters. This section describes the
> **shape each droplet needs to take**; locate the create form
> via the dashboard's **Droplets** sidebar entry or top search
> bar.

Required for each droplet:

- **Image:** Ubuntu 24.04 LTS x64 (Distributions tab; not Marketplace).
- **Region:** **same region as the primary Spaces bucket** (Region
  A from B2). Keeping all four droplets in the same region keeps
  intra-VPC traffic free.
- **VPC Network:** the default VPC in that region. **All four
  droplets MUST be in the same VPC** — that's how they reach
  each other over private IPs.
- **Authentication:** SSH Key. On the first droplet, click
  "+ New SSH Key" and paste your laptop's `~/.ssh/id_ed25519.pub`
  (this saves it to your account). On droplets #2–4, just tick
  the same key.
- **DO Backups:** **off** (our own backup pipeline runs).
- **DO Monitoring:** **on** (free agent, useful baseline metrics).
- **Tags:** `shithub` plus a per-role tag (e.g., `shithub-app`).
- **Project:** the project you're putting everything in.

Per-droplet variations:

| # | Hostname             | Size (DO slug) | Cost/mo | Per-role tag         |
|---|----------------------|----------------|---------|----------------------|
| 1 | `shithub-app`        | s-2vcpu-4gb    | $24     | `shithub-app`        |
| 2 | `shithub-db`         | s-2vcpu-4gb    | $24     | `shithub-db`         |
| 3 | `shithub-backup`     | s-1vcpu-2gb    | $12     | `shithub-backup`     |
| 4 | `shithub-monitoring` | s-2vcpu-4gb    | $24     | `shithub-monitoring` |

Size selection: in the create form, look for **Basic** plan →
**Regular SSD** → the size matrix. The slug names above are
DO's API identifiers and appear under each tile in the form.
**Capture the IPs.** For each droplet, write down both the
**public IPv4** (for SSH from your laptop, and for `shithub-app`
to point DNS at) and the **private IPv4** (for inter-droplet
traffic — appears under "Private IPv4" in the droplet detail).
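
If you'd rather not transcribe IPs from four dashboard pages,
`doctl` can list them in one shot, and a small env file keeps them
handy for later phases. The addresses below are placeholders
(203.0.113.0/24 is a documentation range) — substitute your own:

```sh
# Optional: list all four droplets' IPs at once (requires doctl,
# authenticated against your DO account).
doctl compute droplet list --tag-name shithub \
  --format Name,PublicIPv4,PrivateIPv4

# Park the values somewhere sourceable for later phases.
cat > "$HOME/shithub-ips.env" <<'EOF'
APP_PUB=203.0.113.10
APP_PRIV=10.108.0.2
DB_PUB=203.0.113.11
DB_PRIV=10.108.0.3
BACKUP_PUB=203.0.113.12
BACKUP_PRIV=10.108.0.4
MON_PUB=203.0.113.13
MON_PRIV=10.108.0.5
EOF
. "$HOME/shithub-ips.env"
echo "db private: $DB_PRIV"
```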

**Now go back to Phase A5** and update the `A` records for `@`
and `www` to point at `shithub-app`'s public IPv4. The `docs`
CNAME is already in place; it starts resolving once the
`shithub-docs` CDN endpoint is live (~1 min).

### B4. Create + attach the block volume

1. Left sidebar → **Volumes** → **Create Volume**.
2. **Region:** NYC3.
3. **Size:** **100 GB** ($10/mo).
4. **Filesystem format:** Ext4. **Mount options:** automatic.
5. **Attach to droplet:** `shithub-app`.
6. **Mount point:** **`/mnt/shithub_data`** (DO's default for the
   volume name).
7. Create + attach.

**Verify (after SSHing in B5):**
```sh
df -h /mnt/shithub_data   # should show ~100 GB Ext4 mounted
```

We'll move the mount to `/data` (where the playbook expects it)
during the Phase C bind-mount step.

### B5. SSH-bootstrap — confirm you can reach all four droplets

From your laptop:

```sh
# Replace IPs with the public IPv4 of each droplet.
ssh root@<APP-IP>          # shithub-app
ssh root@<DB-IP>           # shithub-db
ssh root@<BACKUP-IP>       # shithub-backup
ssh root@<MONITORING-IP>   # shithub-monitoring
```

If you get `Permission denied (publickey)`, your SSH key wasn't
attached at create time. Without a working key, the only recovery
path is the DO console: Droplet → Access → Reset Root Password.

**Verify:** `whoami` returns `root` on each.
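
The four checks collapse into one loop; `BatchMode=yes` makes a
missing key fail fast instead of falling back to a password prompt.
The IP variable names are illustrative — use whatever you recorded:

```sh
# Assumes APP_IP, DB_IP, BACKUP_IP, MONITORING_IP are exported
# (illustrative names, not from the repo).
for ip in "$APP_IP" "$DB_IP" "$BACKUP_IP" "$MONITORING_IP"; do
  ssh -o BatchMode=yes -o ConnectTimeout=5 "root@$ip" hostname \
    || echo "UNREACHABLE: $ip" >&2
done
```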

### B6. Bootstrap inter-droplet SSH

Ansible will use shithub-app as its control node (we install it
there in Phase C). For Ansible to reach the other three over
the private network, shithub-app needs an SSH key authorized on
each.

On **shithub-app**:

```sh
ssh-keygen -t ed25519 -f /root/.ssh/id_ed25519 -N ""
cat /root/.ssh/id_ed25519.pub
```

Copy that public key. On **shithub-db**, **shithub-backup**, and
**shithub-monitoring**:

```sh
mkdir -p /root/.ssh
cat >> /root/.ssh/authorized_keys <<'EOF'
<paste shithub-app's pubkey here>
EOF
chmod 600 /root/.ssh/authorized_keys
```

**Verify (from shithub-app):**
```sh
ssh root@<DB-PRIVATE-IP> hostname          # → shithub-db
ssh root@<BACKUP-PRIVATE-IP> hostname      # → shithub-backup
ssh root@<MONITORING-PRIVATE-IP> hostname  # → shithub-monitoring
```

If asked about host key fingerprints, say yes.
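
To skip the prompts entirely, pre-seed known_hosts with
`ssh-keyscan` (trust-on-first-scan, which is reasonable inside a
VPC you created minutes ago). Variable names mirror the optional
IP env file; substitute real private IPs if you didn't keep one:

```sh
# Run on shithub-app. DB_PRIV etc. are illustrative variables
# holding each droplet's private IPv4.
for ip in "$DB_PRIV" "$BACKUP_PRIV" "$MON_PRIV"; do
  ssh-keyscan -H "$ip" >> "$HOME/.ssh/known_hosts" 2>/dev/null
done
```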

---

## Phase C — Hand Claude Code the keyboard

### C1. Install Claude Code on shithub-app

On **shithub-app**:

```sh
apt-get update
apt-get install -y curl ca-certificates
curl -fsSL https://claude.ai/install.sh | sh
```

Then run `claude` and authenticate with the same Anthropic
account you're using on your laptop (browser flow on your
laptop, paste the code into the SSH terminal).

### C2. Install build dependencies on shithub-app

These are needed to build the `shithubd` binary on the droplet:

```sh
apt-get install -y \
  git make build-essential \
  ansible \
  golang-go
go version   # verify Go ≥ 1.22
```

Ubuntu 24.04's apt Go (~1.22) is new enough; if `go version`
shows ≥ 1.22, skip the rest of this step. If you do need a
newer Go, install from the official tarball (substitute the
current version from https://go.dev/dl/):

```sh
curl -LO https://go.dev/dl/go1.22.5.linux-amd64.tar.gz
rm -rf /usr/local/go
tar -C /usr/local -xzf go1.22.5.linux-amd64.tar.gz
echo 'export PATH=$PATH:/usr/local/go/bin' >> /root/.bashrc
source /root/.bashrc
go version
```

### C3. Clone the source

```sh
mkdir -p /root/src && cd /root/src
git clone https://github.com/tenseleyFlow/shithub.git
cd shithub
git log --oneline -3   # confirm latest commit
```

### C4. Move the volume mount to /data

The Ansible playbook expects `/data` as the data root. We could
edit the inventory, but it's cleaner to bind-mount the
DO-attached volume to `/data`:

```sh
mkdir -p /data
mount --bind /mnt/shithub_data /data
echo '/mnt/shithub_data /data none bind 0 0' >> /etc/fstab

# Verify the bind mount is active (the fstab line is what makes
# it survive reboots):
mount | grep -E '/(data|mnt/shithub_data)'
```
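
A stricter check than eyeballing `mount` output is to tear the bind
down and let fstab recreate it — which is exactly what a reboot
does:

```sh
umount /data          # drop the live bind mount
mount -a              # replay /etc/fstab, as boot would
mountpoint -q /data && echo "bind mount OK: fstab recreates it" \
  || echo "NOT mounted — check the fstab line" >&2
```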

### C5. Hand off to Claude Code

From your SSH session on shithub-app, run:

```sh
cd /root/src/shithub
claude
```

When Claude is up, paste this priming message:

> This is the shithub deploy you've been planning. The repo is at
> github.com/tenseleyFlow/shithub. You wrote the sprint specs at
> .docs/sprints/, especially S37-deploy.md and S40-launch.md.
> We are at Phase D of `deploy/cutover/SETUP-GUIDE.md` — please
> read that file and the prerequisite docs, then walk me through
> Phase D with my hands on the keyboard. Domain is shithub.sh.
> Postmark is set up. Spaces buckets are: shithub-prod (NYC3,
> private), shithub-prod-dr (SFO3, private), shithub-docs (NYC3,
> public). Volume bind-mounted at /data. Other droplets reachable
> via private IPs.

Claude will pick up from there. **The rest of this guide is
written for Claude (or you, if you'd rather drive yourself).**

---

## Phase D — Inventory + secrets

### D1. Copy the production inventory template

```sh
cd /root/src/shithub
cp deploy/ansible/inventory/production.example deploy/ansible/inventory/production
```

The bare name `production` is gitignored so secrets stay out of
the repo.

### D2. Generate the cryptographic secrets

```sh
# wg genkey needs wireguard-tools (not preinstalled on Ubuntu).
apt-get install -y wireguard-tools

# Session signing key (cookie MAC).
openssl rand -base64 32 > /tmp/session_key
# TOTP AEAD key (encrypts 2FA secrets at rest).
openssl rand -base64 32 > /tmp/totp_key
# Postgres passwords.
openssl rand -base64 24 > /tmp/db_password
openssl rand -base64 24 > /tmp/hook_password
# WireGuard private keys (one per host).
for h in app db backup monitoring; do
  wg genkey > /tmp/wg_${h}.key
  wg pubkey < /tmp/wg_${h}.key > /tmp/wg_${h}.pub
done
```

### D3. Fill in the inventory

```sh
$EDITOR deploy/ansible/inventory/production
```

Fill in:
- `app_host`, `db_host`, `backup_host`, `monitoring_host` — the
  **private IPv4** of each droplet.
- `domain: shithub.sh`
- `caddy_email: ops@shithub.sh` (Let's Encrypt notifications)
- `db_password`, `hook_password` — paste from `/tmp/`.
- `session_key`, `totp_key` — paste from `/tmp/`.
- `s3_endpoint: nyc3.digitaloceanspaces.com` (DigitalOcean Spaces via
  the S3-compatible API)
- `s3_region: us-east-1` (Spaces uses this for SigV4)
- `s3_bucket: shithub-prod`
- `s3_access_key_id`, `s3_secret_access_key` — the Spaces key
  generated in B2.
- `email_backend: postmark`
- `postmark_server_token` — from A4.
- `email_from: shithub <noreply@shithub.sh>`
- `auth_base_url: https://shithub.sh`
- WireGuard peer keys from `/tmp/wg_*.{key,pub}`.

After filling in:

```sh
chmod 600 deploy/ansible/inventory/production
shred -u /tmp/session_key /tmp/totp_key /tmp/db_password \
  /tmp/hook_password /tmp/wg_*.key
```

(Keep the public WireGuard keys in case you need to add a peer
later; the private keys are now only in the inventory.)
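
Belt-and-braces before moving on: make sure no template placeholder
survived the edit. The placeholder pattern here is an assumption —
match it to whatever markers `production.example` actually uses:

```sh
inv=deploy/ansible/inventory/production
# Pattern is illustrative: angle-bracket tokens and common markers.
if grep -nE 'CHANGEME|FIXME|<[A-Z_-]+>' "$inv"; then
  echo "unfilled placeholders above — fix before Phase E" >&2
fi
```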

---

## Phase E — Deploy

### E1. Dry-run

```sh
cd /root/src/shithub
make deploy-check ANSIBLE_INVENTORY=production
```

Read the diff. Expect every host to be `changed`. If any host
shows `unreachable`, the SSH bootstrap from B6 missed a droplet.

### E2. Build the binary with the version stamp

We're not cutting the v0.1.0 tag yet — that's a launch-day
ceremony. For this first deploy, the binary will stamp the
short commit + build time, which is fine; the soft-launch
window catches surprises before we tag.

```sh
make build
./bin/shithubd version
# expect:
#   Version: <short-commit-or-dev>
#   Commit:  <short-commit>
#   Built:   <today, UTC>
```

### E3. Apply

```sh
make deploy ANSIBLE_INVENTORY=production
```

Expect ~15–30 min on the first run. The roles run in this order:
**base → postgres → shithubd → caddy → wireguard → backup →
monitoring-client.** Caddy will obtain real Let's Encrypt certs
on first request; `caddy_use_acme_staging` should be `false` in
the inventory so we get a real cert.

If a role fails, **stop**. Re-running with `--limit` and the
specific role tag is the surgical path. Read the journal of the
failing service before retrying:

```sh
journalctl -u <service> -n 200
```
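
The surgical re-run looks roughly like this — the playbook path and
tag names are assumptions, so check how the Makefile's `deploy`
target invokes ansible-playbook and reuse its arguments:

```sh
# Re-run just the postgres role on just the db host.
# deploy/ansible/site.yml and the tag name are illustrative —
# mirror the Makefile's actual invocation.
ansible-playbook -i deploy/ansible/inventory/production \
  deploy/ansible/site.yml --limit shithub-db --tags postgres
```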

### E4. Bootstrap the admin (you)

```sh
ssh root@<APP-PRIVATE-IP>   # skip if you're already on shithub-app
sudo -u shithub /usr/local/bin/shithubd admin bootstrap-admin \
  --email you@your-current-email.com
```

The CLI prints a one-time password-reset link. Open it in a
browser, set a password, and **immediately enable 2FA**.

### E5. Smoke

```sh
deploy/cutover/smoke.sh https://shithub.sh
```

If everything is green, the soft launch is live. Signups are
still gated (we set `SHITHUB_AUTH__SIGNUP_DISABLED=true` in the
inventory by default for soft launch); you're the only user.
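
A few manual probes behind the smoke script, useful when it fails
and you want to know which layer broke. `/readyz` is the readiness
endpoint referenced in the troubleshooting section; the cert check
is a generic openssl probe:

```sh
curl -fsS -o /dev/null -w 'home:   %{http_code}\n' https://shithub.sh/
curl -fsS -o /dev/null -w 'readyz: %{http_code}\n' https://shithub.sh/readyz

# Certificate issuer + expiry at a glance:
echo | openssl s_client -connect shithub.sh:443 \
  -servername shithub.sh 2>/dev/null | openssl x509 -noout -issuer -dates
```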

---

## Phase F — Soft launch (24–48h)

You're now using shithub yourself. Things to do:

1. **Push the project's source.** Create the `shithub` org,
   create the `shithub` repo under it, push from this checkout.
2. **Walk every flow.** Signup (toggle the gate, verify, gate
   again), password reset, 2FA, SSH key add (the SSH transport
   isn't shipped, so this just stores the key — fine), PAT
   create, repo create, push, issue, PR, review, merge, search.
3. **Note every rough edge.** File issues against
   `shithub/shithub` itself.
4. **Watch the dashboards.** Grafana on the monitoring host.
5. **First daily backup runs at the next 03:00 UTC.** Confirm
   it landed in Spaces the morning after.
6. **First restore drill.** When the second daily backup
   exists, run the drill.
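
The morning-after backup check (item 5) can be done from the CLI. A
sketch, assuming the AWS CLI with the B2 key exported; the object
prefix is an assumption — adjust it to whatever layout the backup
role actually writes:

```sh
# Newest objects in the primary backup bucket; expect a pg_dump and
# WAL segments timestamped after 03:00 UTC today.
aws s3 ls "s3://shithub-prod/" --recursive \
  --endpoint-url https://nyc3.digitaloceanspaces.com | tail -5
```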

Day 2 morning: you have a real instance with real data + a real
backup. You're as ready as you'll ever be.

---

## Phase G — Public launch

### G1. Tag v0.1.0

```sh
cd /root/src/shithub
git tag -a v0.1.0 -m "v0.1.0 — initial public release"
git push origin v0.1.0
```

### G2. Build and re-deploy with the tagged version

```sh
git fetch --tags
git checkout v0.1.0
make build
./bin/shithubd version   # Version: v0.1.0
make deploy ANSIBLE_INVENTORY=production
```

Now the home page's Version field reads `v0.1.0`. Verify in a
browser.

### G3. Open signups

Edit the inventory:

```sh
$EDITOR deploy/ansible/inventory/production
# Change: signup_disabled: false
make deploy ANSIBLE_INVENTORY=production ANSIBLE_TAGS=app
```

### G4. Update the status page

```sh
$EDITOR docs/public/status.md
# Update timestamp + "All systems normal."
make docs   # local mdBook build
deploy/docs-site/sync-to-spaces.sh
```

### G5. Post the announcement

`docs/blog/v0.1.0-launch.md` is the copy. Submit to:
- Hacker News
- /r/programming, /r/selfhosted
- lobste.rs
- Mastodon

You're live. Read `docs/internal/runbooks/day-one.md` next.

---

## When something goes wrong

- **Caddy won't get a cert.** DNS hasn't fully propagated, or
  port 80 is blocked. Check both. Last resort: flip
  `caddy_use_acme_staging=true`, redeploy, get the staging cert
  to confirm everything else works, then flip back when DNS is
  good.
- **`/readyz` returns 503.** DB or storage unreachable. Check
  `journalctl -u shithubd-web` for the specific error.
- **Postmark says emails go through but they don't arrive.**
  Check spam, then check Postmark's Activity tab — every send
  is logged. DKIM may not be propagated yet; the first 24h is
  rough on cold-start deliverability.
- **Ansible reports "host unreachable" mid-run.** SSH from
  shithub-app to the failing host's private IP; if that doesn't
  work, you're missing the key in B6.
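
Quick triage for the cert bullet — is DNS pointing at shithub-app,
and is port 80 answering? (`dig` ships in Ubuntu's `dnsutils`
package, `nc` in `netcat-openbsd`):

```sh
dig +short shithub.sh A           # expect shithub-app's public IPv4
dig +short docs.shithub.sh CNAME  # expect the DO CDN target
nc -vz shithub.sh 80              # expect success — ACME needs port 80
```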

Anything else: read `docs/internal/runbooks/incidents.md` and
the `troubleshooting.md` in self-host docs.