
# Deploying with Ansible

The reference install path is the Ansible playbook in `deploy/ansible/`. From a fresh Ubuntu 24.04 box the workflow is: inventory → `make deploy` → bootstrap an admin → done.

## 1. Inventory

Copy the example inventory:

```sh
cp deploy/ansible/inventory/staging.example deploy/ansible/inventory/staging
$EDITOR deploy/ansible/inventory/staging
```

Variables marked `# REQUIRED` in the example come from your secret store (Bitwarden, 1Password, ansible-vault). **Do not** commit a real inventory file. The repo's `.gitignore` excludes the bare names `staging` and `production` to make this obvious.
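If ansible-vault is your secret store, one way (a sketch, not something the playbook requires) is to keep each `# REQUIRED` value as an encrypted string in the inventory:

```sh
# Emits a !vault-tagged block you can paste into the inventory file.
ansible-vault encrypt_string 'REPLACE_WITH_REAL_PASSWORD' --name 'db_password'
```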

Critical variables:

- `db_password` — Postgres password for the `shithub` role.
- `hook_password` — Postgres password for the `shithub_hook` role.
- `session_key` — base64 32-byte AEAD key. Generate with `openssl rand -base64 32` (shown below).
- `totp_key` — base64 32-byte AEAD key for at-rest TOTP secrets.
- `s3_*` — bucket name, region, credentials.
- `email_*` — Postmark token or SMTP creds.
- `wireguard_*` — peer keys for the monitoring mesh.
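Generating the two AEAD keys is the command above run once per key:

```sh
# 32 random bytes, base64-encoded: one value for session_key, one for totp_key.
openssl rand -base64 32   # session_key
openssl rand -base64 32   # totp_key
```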

## 2. Dry-run

```sh
make deploy-check ANSIBLE_INVENTORY=staging
```

Reports every change Ansible would make without making it. Read the diff before continuing.
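`deploy-check` presumably wraps Ansible's check mode; if you ever need the dry-run by hand, the equivalent is a `--check --diff` run. The playbook path below is a guess, so treat the Makefile as authoritative:

```sh
# --check makes no changes; --diff prints what each task would rewrite.
# site.yml is an assumed playbook name, not confirmed by these docs.
ansible-playbook -i deploy/ansible/inventory/staging deploy/ansible/site.yml --check --diff
```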

## 3. Apply

```sh
make deploy ANSIBLE_INVENTORY=staging
```

The playbook is idempotent; a second run should report `changed=0`. If it doesn't, that's a config drift bug — open an issue.

The roles in order:

- **base** — apt baseline, ufw default-deny, system users (`shithub`, `shithub-ssh`), data root.
- **postgres** — installs PG16, initdb's `/data/pgdata`, applies our `postgresql.conf` and `pg_hba.conf`, wires the WAL archive command, creates `shithub` and `shithub_hook` roles with exact-grant permissions.
- **shithubd** — copies the binary into `/usr/local/bin`, drops env files into `/etc/shithub/`, installs the three systemd units. The web service's `ExecStartPre=` runs `shithubd migrate up` so deploys with new schema are one command.
- **caddy** — installs Caddy + the templated `Caddyfile`. Auto-TLS via Let's Encrypt staging until you flip `caddy_use_acme_staging=false`.
- **wireguard** — peers each host into the 10.50.0.0/24 mesh.
- **backup** — installs the daily backup timer on the db host and the cross-region sync timer on the backup host.
- **monitoring-client** — node-exporter + promtail on every host pointing at the monitoring host.

## 4. Bootstrap the first admin

The first deploy creates no users. SSH to the web host and run:

```sh
sudo -u shithub /usr/local/bin/shithubd admin bootstrap-admin --email you@example.com
```

The CLI creates a user (or grants site-admin to an existing one) and prints a one-time password-reset link. Open it in a browser, set a password, enable 2FA. Subsequent admin grants happen through `/admin/users/{id}`.

## 5. Smoke

- `https://shithub.sh/` — Caddy serves the home page.
- `https://shithub.sh/-/health` — returns `200 OK` with the build version (scriptable; see the curl sketch below).
- Sign in as the bootstrap admin. Create a test repo. Push to it.
- `https://shithub.sh/admin/` — admin dashboard renders.
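The health endpoint is easy to script; a minimal curl sketch, assuming nothing beyond the URLs above:

```sh
# Expect HTTP 200; the body carries the build version.
curl -fsS -o /dev/null -w '%{http_code}\n' https://shithub.sh/-/health
curl -fsS https://shithub.sh/-/health
```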

## 6. Production

Same procedure with the production inventory:

```sh
make deploy ANSIBLE_INVENTORY=production
```

For larger deploys, target a single host first:

```sh
make deploy ANSIBLE_INVENTORY=production ANSIBLE_LIMIT=web-02
```

## Postgres

The `postgres` role does the heavy lifting; no operator-visible action should be needed. Specific things to be aware of:

- **Archive command.** `deploy/postgres/archive_command.sh` ships every WAL segment to `spaces-prod:shithub-wal`. Postgres refuses to recycle a segment until the script reports success; a failing archiver therefore fills the disk. Alert on `pg_stat_archiver.failed_count > 0` (the default rule does; a manual check is sketched after this list).
- **Hook role.** A separate Postgres role `shithub_hook` is used by the `shithubd hook …` subcommands. It has minimal grants — see `deploy/postgres/hook-role-grants.sql`. If you add a new hook subcommand that touches a new table, update those grants.
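For a manual look at archiver health, the counters the alert rule watches live in `pg_stat_archiver`; run the query as a role that can read cluster stats, e.g. the `postgres` superuser:

```sh
sudo -u postgres psql -c \
  "SELECT archived_count, failed_count, last_failed_wal, last_failed_time FROM pg_stat_archiver;"
```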

## Caddy

The Caddyfile (`deploy/Caddyfile.j2`) reverse-proxies `127.0.0.1:8080`. Two special path patterns:

- `/_/<owner>/<repo>.git/(info/refs|git-(upload|receive)-pack)` uses a 30-minute timeout to accommodate large pushes / clones.
- Static assets under `/static/` get aggressive cache headers.

Access logs are JSON to `/var/log/caddy/access.log`; promtail ships them to Loki.
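Because the access log is JSON, tailing it through `jq` (not installed by the caddy role; assume you add it yourself) gives readable output:

```sh
# Pretty-print access-log entries as they stream.
sudo tail -f /var/log/caddy/access.log | jq .
```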

## sshd

The provided `sshd_config.j2` is conservative: PasswordAuthentication off, PubkeyAuthentication on, X11/agent/TCP forwarding off. The `Match User git` block uses the AKC contract — see [Authentication / SSH on the user side](../user/ssh.md).
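To confirm the rendered config is what sshd actually runs on a host, `sshd -T` dumps the effective settings; a quick spot-check:

```sh
# Expect "no", "yes", "no" respectively.
sudo sshd -T | grep -Ei '^(passwordauthentication|pubkeyauthentication|x11forwarding) '
```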

## Rolling forward

```sh
git fetch --tags
git checkout v<new-version>
make deploy ANSIBLE_INVENTORY=production
```

That's it. The `migrate up` is part of the unit's startup preflight; if a migration fails, the unit stays in `activating` and the journal has the error. See [Upgrades](./upgrades.md) for the major-version checklist.
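If a deploy hangs in `activating`, the failed migration is in the journal. The unit name below is a placeholder (these docs only say "the web service"); substitute the real one:

```sh
# shithubd.service is assumed; use the actual web service unit name.
systemctl status shithubd.service
journalctl -u shithubd.service -n 100 --no-pager
```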
