
# Actions/CI — schema + workflow dialect (S41a)

The Actions/CI subsystem is shipping in eight sub-sprints (S41a through S41h, plus optional S41i Nix engine). This doc covers what S41a lays down: the SQL schema, the workflow YAML dialect, the expression evaluator, and the load-bearing taint contract every later sub-sprint depends on.

S41a is parser + schema only — no triggers, no runner, no UI. The goal is to land a frozen contract that S41b/c/d/e can build against without churning under them.

## SQL schema

Actions migrations currently span 0042–0051, 0053, 0057, and 0060. Migration 0052 belongs to the repo source-remotes feature, 0054 belongs to push event protocol tracking, 0055 belongs to the social feed, 0056 belongs to user profile contribution settings, 0058 belongs to repo name reuse, and 0059 belongs to GitHub org imports.

| # | Table | Purpose |
| ---- | --------------------------------- | ------------------------------------------------------------- |
| 0042 | `workflow_runs` | One row per triggered workflow execution |
| 0043 | `workflow_jobs` | Jobs within a run (one row per `jobs.<key>`) |
| 0044 | `workflow_steps` | Steps within a job (one row per `steps[i]`) |
| 0045 | `workflow_secrets` | Per-repo + per-org encrypted secrets |
| 0046 | `workflow_runners` | Registered runners + `runner_tokens` |
| 0047 | `workflow_step_log_chunks` | Hot-path append log buffer (concatenated to blob on finalize) |
| 0048 | `workflow_artifacts` | Per-run artifact metadata (90-day default expiry) |
| 0049 | `actions_variables` | Non-secret per-repo/org config (Forgejo parity) |
| 0050 | `workflow_steps.step_with` | Parsed `with:` inputs for magic `uses:` aliases |
| 0051 | `workflow_runs.trigger_event_id` | Trigger idempotency for retries/admin replays |
| 0053 | `runner_jwt_used` | Single-use replay gate for runner job JWTs |
| 0057 | `workflow_job_secret_masks` | Encrypted claim-time log mask snapshots per job |
| 0060 | Actions retention indexes | Narrow cleanup indexes for terminal steps/runs |

A few load-bearing choices, called out so they're easy to spot in a later schema diff:

- **`workflow_runs.run_index`** — per-repo monotonic counter. Each repo gets `#1`, `#2`, … so URLs like `/{owner}/{repo}/actions/runs/42` are stable and human-friendly. Crib from Forgejo's `actions_run.index`.
- **`workflow_runs.version`** — optimistic-lock counter. Mutators bump-and-check rather than `SELECT … FOR UPDATE`. Required for S41g's race between a cancel request and a state transition.
- **`workflow_runs.concurrency_group`** — the concurrency-slot key, resolved at trigger time from the workflow's `concurrency.group:` expression. S41g's slot manager keys off this column, and runner claim blocks younger runs while an older same-group run still has a queued/running job without `cancel_requested=true`.
- **`workflow_runs.parent_run_id`** — for re-runs. The new run references the original; the UI shows a "re-ran from #N" link.
- **`workflow_jobs.runner_id`** — FK added in 0046 (after the runners table exists). Nullable until claimed.
- **`workflow_steps`** has a CHECK constraint enforcing `(run_command IS NOT NULL) <> (uses_alias IS NOT NULL)` — exactly one of `run:` or `uses:`. The `uses_alias` column is further CHECK-constrained to the three magic aliases we accept in v1.
- **`workflow_secrets`** owns its value as `bytea`, ChaCha20Poly1305-sealed via `internal/auth/secretbox`. Key derivation uses `cfg.Auth.TOTPKeyB64` (already an operator-managed root) + an `(owner, kind, name)` salt so re-keying is per-row.
- **`workflow_step_log_chunks.chunk`** is capped at 512 KB per row. The runner sends bigger payloads in pieces. `(step_id, seq)` is UNIQUE so duplicate sends are idempotent.
- **`actions_variables`** — non-secret, plaintext, scoped exactly like secrets (per-repo or per-org, never both on the same row). Forgejo has the same split; we mirror it for parity.
- **`runner_jwt_used`** — primary-keyed by JWT `jti`. Job endpoints insert into this table during auth; zero inserted rows means replay and the API returns 401. JWTs are HMAC-SHA256 and use an HKDF subkey derived from `auth.totp_key_b64` with label `actions-runner-jwt-v1`.
- **`workflow_job_secret_masks`** — one encrypted JSON array of exact secret values per claimed job. It snapshots the log scrub set at claim time, so a rotated or deleted secret stays in server-side masking while the old value is still in a runner's job payload.

The version and run_index patterns are the two pieces I'd point out to a future maintainer first. Both are cheap to add now and miserable to retrofit later.

## Workflow YAML dialect (v1)

We accept a strict subset of GitHub Actions YAML. The parser rejects unknown keys at parse time so workflow authors find their typos immediately instead of shipping a workflow that does nothing.

### Top level

```yaml
name: my-pipeline                         # optional human name
on: [push, pull_request]                  # or full-form (see below)
permissions: read-all                     # default if omitted
env: { GREETING: "hello" }                # workflow-level env
concurrency:                              # optional slot manager
  group: ${{ shithub.ref }}
  cancel-in-progress: true
jobs:
  <key>:                                  # 1+ entries
    runs-on: ubuntu-latest
    needs: [other-key]                    # optional dep edge
    if: ${{ shithub.actor == 'alice' }}   # optional gate
    timeout-minutes: 60                   # 1..4320, default 360
    permissions: { contents: read }       # narrow workflow perms
    env: { K: v }                         # job overlay
    steps:
      - name: ...
        id: ...
        if: ...
        run: echo hi                      # run XOR uses
        uses: actions/checkout@v4         # exactly one of three aliases
        working-directory: ...
        env: { ... }
        continue-on-error: false
```

### Triggers (`on:`)

v1 supports four triggers — anything else is a parse error.

| Trigger | Surface |
| ------------------- | ------------------------------------------------------------------ |
| `push` | `branches:`, `tags:`, `paths:` (include + `!exclude` semantics) |
| `pull_request` | `types:` (opened/synchronize/reopened/...), `branches:`, `paths:` |
| `schedule` | one or more `- cron: <5-field-expr>` |
| `workflow_dispatch` | `inputs:` map (string/boolean/choice/environment) |
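Put together, the full-form `on:` surface looks roughly like this. The field names come from the table above; the concrete branch names, cron expression, and input are invented for illustration:

```yaml
on:
  push:
    branches: [main, "release/*"]
    paths: ["src/**", "!src/vendor/**"]
  pull_request:
    types: [opened, synchronize, reopened]
    branches: [main]
  schedule:
    - cron: "0 4 * * 1"
  workflow_dispatch:
    inputs:
      target:
        type: choice
        options: [staging, production]
```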

### `uses:` allowlist

Exactly three aliases are reserved at parse time, no exceptions:

| Alias | Parser status | Runner status |
| ------------------------------ | ------------- | ----------------------------------------- |
| `actions/checkout@v4` | accepted | rejected until checkout support lands |
| `shithub/upload-artifact@v1` | accepted | rejected until artifact upload lands |
| `shithub/download-artifact@v1` | accepted | rejected until artifact download lands |

Any other uses: value (community actions, Docker images, composite actions) is an Error-severity diagnostic. The marketplace problem is explicitly out of scope for v1; revisit only if a real demand exists and we have an answer for supply-chain trust.

The current Docker executor runs run: steps only. It fails a reserved uses: alias deliberately instead of pretending checkout/artifact semantics exist. This keeps the first end-to-end smoke path honest: run:-only workflows are executable now, while repository checkout and artifact transfer remain explicit follow-up work.

### File-size + parser caps

- 64 KB workflow file size cap (`workflow.MaxWorkflowFileBytes`). Files larger than this are rejected before YAML decode begins — defends against pathological inputs and gives operators a predictable upper bound on parser memory.
- 100 anchors per document (`workflow.MaxYAMLAliases`) — the billion-laughs guard. `yaml.v3` doesn't expose a direct knob; we count alias nodes during a tree walk and bail.

## `${{ github.* }}` alias

The dialect is intentionally rebranded to ${{ shithub.* }}. Authors who paste GHA workflows in unmodified will see their ${{ github.* }} references continue to work because the evaluator rewrites path[0] from github to shithub at the top of evalRef before taint computation, dispatch, and error rendering.

The alias is intentionally scope-narrow: only fields that exist in our shithub.* namespace (run_id, sha, ref, actor, event) route through. GHA fields we don't expose in v1 — event_name, repository, run_number, workspace, etc. — error with the canonical unknown shithub field "X" message. Slightly confusing for a GHA-flavored author but keeps the v1 namespace surface tight.

The alias preserves the load-bearing taint flag: github.event.X taints exactly like shithub.event.X. TestEval_GithubAliasIsTainted pins this contract.

Migration to strict-compat (drop the alias entirely) later is a one-PR flip; moving the other direction is much harder.

This is a deliberate decision recorded in the campaign plan.

## Expression evaluator

${{ … }} expressions are parsed into a tiny AST and evaluated by internal/actions/expr. The surface is intentionally minimal:

### Allowed namespaces

| Namespace | Source | Tainted? |
| ----------------- | -------------------- | --------------------------- |
| `secrets.X` | `workflow_secrets` | no, but sensitive |
| `vars.X` | `actions_variables` | no (operator-controlled) |
| `env.X` | workflow file | no (workflow author's text) |
| `shithub.run_id` | dispatch context | no |
| `shithub.sha` | dispatch context | no |
| `shithub.ref` | dispatch context | no |
| `shithub.actor` | dispatch context | no (resolved username) |
| `shithub.event.*` | trigger payload | yes — always |

runner.*, steps.*, needs.*, matrix.*, inputs.* are all parse-time errors. They're parked for v2 and the parser's allowlist-closed posture means a future PR can't widen this accidentally without a clearly visible diff.

### Allowed functions

contains(haystack, needle), startsWith(s, prefix), endsWith(s, suffix), plus the four job-status predicates success(), failure(), cancelled(), always(). That's the whole list. fromJSON, hashFiles, toJSON, format, and friends are explicitly rejected — they each carry footgun risk (parser DoS, FS access, side-channel injection) that we don't want to take on in v1.

### Missing-value semantics

| Reference | Missing → ? |
| ------------------------------ | --------------------------------- |
| `secrets.NOT_BOUND` | error (loud — workflow won't run) |
| `vars.MISSING` | empty string (GHA parity) |
| `env.MISSING` | empty string (GHA parity) |
| `shithub.event.deeply.missing` | `null` but still tainted |

The "missing event path → null but tainted" case is a defence-in- depth choice: even if the path doesn't resolve, the result still came from the event payload, and we'd rather over-flag than under.

## Taint contract — the load-bearing piece

This is the contract every later sub-sprint hangs off. Get it wrong and we have an injection-shaped hole in the runner.

### Where the flag lives

The taint flag lives on expr.Value (the evaluator-produced value), not workflow.Value (the parser-produced value). Two different structs share the name Value because they live in different packages, but they have different jobs:

- `workflow.Value` carries the raw source string the parser read out of the YAML (an env entry, a `with:` input, a concurrency group expression). At parse time we don't know what the `${{ … }}` body will resolve to, so there's nothing to taint yet.
- `expr.Value` is what the evaluator returns when it resolves a reference at runtime. This struct carries `Tainted bool`. The runner's exec layer (S41d) consumes that flag.

Pre-L5 the parser-side struct also had a Tainted bool field plus a Tainted() constructor — both unused, both confusing because they suggested two sources of truth. Dropped in S41a-L5 cleanup.

### Propagation

Every expr.Value carries a Tainted bool. Set true iff the value transitively depends on shithub.event.*. Operators control secrets, vars, env, the rest of shithub.*. Authors control the workflow file. Only the event payload is attacker-controlled: a PR title, a commit message, a branch name from a fork. Those values must never be interpolated into a shell string.

Propagation rules:

- Reading `shithub.event.X` → `Tainted: true` (always, including missing-path null results).
- Reading `secrets.X` → `Sensitive: true`. Secrets are operator-controlled, so they are not tainted, but they must not appear in shell source strings or Docker argv.
- Reading any other namespace → `Tainted: false` and `Sensitive: false`, except `env.X` preserves both flags of the resolved env value. This closes the escape where an event-derived or secret-derived value is first assigned to env and then interpolated through `${{ env.X }}`.
- Binary op (`==`, `!=`, `&&`, `||`) → tainted or sensitive if either operand is.
- Unary op (`!`) → tainted/sensitive iff its operand is.
- Function call (`contains`, `startsWith`, `endsWith`) → tainted or sensitive if any argument is.

The runner consumes Tainted and Sensitive and refuses to interpolate either class into shell strings. Instead, those values are bound to runner-owned SHITHUB_INPUT_xx envvars and the shell source only references those placeholders. The author writes:

```yaml
- run: echo "PR title was: ${{ shithub.event.pull_request.title }}"
```

The runner sees a tainted reference; it compiles the step to:

```sh
SHITHUB_INPUT_0="$user_pr_title" exec sh -c 'echo "PR title was: $SHITHUB_INPUT_0"'
```

…where $user_pr_title is set via Go's cmd.Env, never inserted into the shell source string or Docker CLI argv. Backticks, $(), ;, && — none of those work as command-injection vectors when the value reaches the shell as environment data instead of syntax.

The shared renderer lives in internal/runner/exec, so future engines consume the same injection boundary instead of reimplementing it. The runner claim payload includes workflow_runs.event_payload; without that field, the runner cannot evaluate and taint ${{ shithub.event.* }} references.

Tests for this contract live in internal/actions/expr/eval_test.go, internal/runner/exec/render_test.go, and internal/runner/engine/docker_test.go. Do not weaken them in a later PR without an audit-checkpoint review — they're explicitly load-bearing for S41e's threat model.

Runner log chunks pass through internal/runner/scrub before they are posted to the API. It masks exact secret values and preserves enough tail bytes between chunks to catch a secret split across chunk boundaries. S41e wires resolved workflow secrets into the runner claim payload and mask set, snapshots that mask set encrypted on the job, then applies the same exact-value scrub again in the runner API before persisting chunks. The server path also carries a possible secret-prefix tail from the prior persisted chunk, so a runner that bypasses client-side scrubbing cannot leak a secret by splitting it across adjacent log POSTs.

## `shithub.event` payload schema (v1)

The event payload is the most user-facing part of the contract: once authors write workflows that template against shithub.event.X, schema changes are breaking. The v1 schema is pinned and labelled v1. Any addition is fine; renames and removals require a major bump.

The schema is enforced by typed constructors in the internal/actions/event package — one per trigger. S41b's pipeline calls these to build payloads; the function signatures pin the field set so adding a key requires editing the constructor in a visible diff. This is the same closed-door discipline as the expression evaluator's namespace allowlist.

| Trigger | Constructor | Top-level keys |
| ------------------- | ------------------------ | -------------- |
| `push` | `event.Push` | `ref`, `before`, `after`, `head_commit{message,id,author}` |
| `pull_request` | `event.PullRequest` | `action`, `number`, `pull_request{title,head{ref,sha},base{ref,sha},user{login}}` |
| `schedule` | `event.Schedule` | (empty map — cron fired; cron expression is on the `workflow_runs` row) |
| `workflow_dispatch` | `event.WorkflowDispatch` | `inputs{<name>: <stringified>}` |

Anything not in this table doesn't exist in v1. Accessing it returns null+tainted (the missing-path semantics above).

Adding a field: edit the constructor in internal/actions/event/, add a row to this doc, and update the corresponding *_FlowsThroughEvaluator test in event_test.go so the new path is exercised end-to-end. Reviewer-required note in the commit message — same standard as a new evaluator function.

Renaming or removing: that's a v1→v2 break. Don't.

## Operator surface

`shithubd admin actions parse <file>` reads a workflow off disk, runs the parser, and dumps diagnostics + a canonical JSON rendering of the parsed AST. Useful for:

- debugging "why is my workflow not picking up changes" reports
- validating a workflow file before committing it
- producing a stable AST snapshot for inclusion in bug reports

Exit codes:

| Code | Meaning |
| ---- | ------------------------------------------------ |
| 0 | clean parse, no Error-severity diagnostics |
| 1 | file unreadable, oversized, or YAML malformed |
| 2 | parse produced Error-severity diagnostics |

Other admin surfaces are scoped to later sub-sprints:

- S41c: `shithubd admin runner register --name <foo>` issues a registration token + writes a row to `workflow_runners`.
- S41g: `POST /api/v1/jobs/{id}/cancel` and the repository run-detail UI request cancellation. Running jobs flip `cancel_requested`; queued jobs are made terminal immediately.
- S41g: `POST /api/v1/runs/{id}/rerun` and the repository run-detail UI re-run completed/cancelled runs. Re-runs read the workflow YAML from the original run's `head_sha`, create a fresh queued `workflow_runs` row, and set `parent_run_id` to the source run.
- S41g: workflow-level `concurrency.group` is resolved at enqueue time against the trigger context (`shithub.ref`, `shithub.sha`, and `shithub.event.*`). With `cancel-in-progress: true`, enqueue requests cancellation for older active runs in the same group. Without it, runner claim leaves the younger run queued until the older run no longer has uncancelled queued/running jobs.
- S41g: `workflow:cleanup` is a daily retention worker enqueued by `shithubd-cron.service`. Operators can run it manually with `shithubd admin run-job workflow:cleanup`.

## Workflow concurrency (S41g)

concurrency.group is a workflow-level slot key. The parser stores the raw value, and internal/actions/concurrency evaluates ${{ ... }} fragments when the run is enqueued. The trigger-time context deliberately does not include secrets; event-derived values may be tainted but are safe here because the value is only used as a database key.

When a run enters a non-empty group:

- `cancel-in-progress: false` leaves the new run queued behind older same-repo, same-group runs while those older runs still have queued/running jobs with `cancel_requested=false`.
- `cancel-in-progress: true` requests cancellation on those older jobs. Queued jobs become terminal immediately; running jobs keep running with `cancel_requested=true` so the runner can kill the active container. Once every active older job is cancel-requested, the group is released for the newer run.

The runner claim query enforces the queueing rule, not the web handler or UI. This keeps heartbeat races honest: multiple runners can poll at the same time, but only jobs whose dependency and concurrency blockers are clear can be claimed.

## Runner timeouts (S41g)

jobs.<key>.timeout-minutes is enforced by shithubd-runner as a whole-job deadline. The parser stores the value in workflow_jobs.timeout_minutes with the GitHub-compatible default of 360 minutes and a 1..4320 cap.

When the deadline expires, the Docker engine explicitly kills the active step container, emits a terminal step update with status=completed and conclusion=timed_out, and the runner reports the job itself as completed/timed_out. The server rolls the parent workflow run up to timed_out when all jobs are terminal. A timed-out step is not masked by continue-on-error; the job deadline always wins.

The runner API increments shithub_actions_step_timeouts_total the first time a step reaches conclusion=timed_out. Duplicate terminal step-status retries do not increment the counter again.

## Retention cleanup (S41g)

workflow:cleanup applies the durable Actions retention contract in this order:

1. Delete hot `workflow_step_log_chunks` for steps completed more than 7 days ago. Finalized logs already live in object storage.
2. Delete expired `workflow_artifacts` rows after deleting their `actions/runs/...` blob objects. The row's `expires_at` value is authoritative so per-upload retention overrides keep working.
3. Delete unpinned terminal `workflow_runs` older than 365 days. Child jobs, steps, artifacts, and consumed JWT rows cascade through FK ownership.
4. Delete consumed `runner_jwt_used` rows whose JWT expiry is more than 30 days old. This preserves replay/audit evidence for recent jobs without letting the replay table grow forever.

The defaults can be overridden in the worker payload:

{"step_log_chunk_days":7,"run_days":365,"jwt_used_days":30,"artifact_batch":1000}

artifact_batch caps each object-delete page and may not exceed 10000. Negative values are poison-job errors. The worker exports shithub_actions_runs_pruned_total{kind} where kind is one of chunks, blobs, runs, or jwt_used.

Production object storage also needs provider-side lifecycle on the same prefix: deploy/spaces/actions-lifecycle.json expires actions/runs/ objects after 90 days and aborts stale multipart uploads after 2 days. Apply it with deploy/cutover/apply-actions-lifecycle.sh.

## Trigger pipeline (S41b)

Three layers between a triggering event and a queued workflow_run:

```
caller (push_process / pulls.Create / pr_jobs.PRSynchronize / dispatch HTTP)
    │
    └─► worker.Enqueue(KindWorkflowTrigger, JobPayload)
            │
            └─► trigger.Handler picks up:
                  Discover .shithub/workflows/*.yml at HEAD SHA
                  Parse each (skip + log on Error diagnostics)
                  Match each against trigger.Event
                  Enqueue each match
                        │
                        └─► trigger.Enqueue (one tx):
                              INSERT workflow_runs (ON CONFLICT DO NOTHING)
                              INSERT workflow_jobs per parsed job
                              INSERT workflow_steps per parsed step
                              (commit)
                              checks.Create per job (post-tx, idempotent
                                via ExternalID 'workflow_run:<id>:job:<key>')
```

### Idempotency on the triggering event

Idempotency keys off the triggering event's identity — the robust pattern — not a UNIQUE on `(repo_id, head_sha)`. Each caller constructs a stable `trigger_event_id` from its triggering event:

| Caller | `trigger_event_id` format |
| ------------------------ | ------------------------------------------- |
| `push_process` | `push:<push_event_id>` |
| `pulls.Create` | `pr_opened:<pr_id>:<head_sha>` |
| `pr_jobs.PRSynchronize` | `pr_synchronize:<pr_id>:<head_sha>` |
| dispatch HTTP | `dispatch:<file>:<sha>:<8-byte-random-hex>` |
| schedule sweep (S41b-2) | `schedule:<workflow_id>:<window_start_unix>` |

Migration 0051 adds `workflow_runs.trigger_event_id` (`text NOT NULL DEFAULT ''`) with a partial UNIQUE on `(repo_id, workflow_file, trigger_event_id) WHERE trigger_event_id <> ''`. The trigger handler does `INSERT … ON CONFLICT DO NOTHING` so:

- Worker retries (the same `push_process` replay) → no duplicate runs.
- Admin replays via `shithubd admin run-job workflow:trigger ...` → no duplicate runs.
- Re-runs explicitly construct a NEW `trigger_event_id` (`rerun:<original_run_id>:<request_uuid>`) and chain back via `parent_run_id`. History is preserved, no collision.

Each caller's collision-free namespace is short-lived and human-debuggable: a Postgres operator can grep workflow_runs.trigger_event_id to see exactly which triggering event produced a given run.

### Filter evaluation

trigger.Match(workflow, event) is a pure function (no I/O, no DB). For each event kind:

- `push`: branch vs tag classified from the ref; only the matching filter list applies (a `branches:` filter rejects tag pushes and vice versa). `paths:` (when set) requires at least one changed path to match. Empty filter = match-all.
- `pull_request`: `types:` defaults to `[opened, synchronize, reopened]` when omitted (GHA parity). `branches:` applies to the base ref. `paths:` as for push.
- `schedule`: requires the workflow to declare the cron expression that fired. The sweep is the source of truth for which cron fires; we just gate on declaration. Avoids interpreting cron semantics in two places.
- `workflow_dispatch`: matches whenever the workflow declares `on.workflow_dispatch`.

Glob semantics in `branches:`/`tags:`/`paths:`: a minimatch subset with `*` (single segment), `**` (any), `/**` end-anchor (optional trailing path), `**/` start-anchor, and `!exclude` (last-match-wins; an exclusion-only list implies include-all).

### Collaborator gate

Per the S41b spec's "external-PR support is parked" decision: PR triggers (both opened and synchronize) only fire when the PR's author is the repo's owning user. Conservative — drops legitimate non-owner collaborators in the org-repo case. Expanding the gate requires plumbing policy.Can into the worker context, which we defer to S41g where the lifecycle work touches that surface anyway.

### Operator surface

- `POST /{owner}/{repo}/actions/workflows/{file}/dispatches` — body: `{"ref": "...", "inputs": {"key": "value"}}` (both optional; `ref` defaults to the repo's default branch). Returns `204 No Content` on success. Synchronous `trigger.Enqueue` (no discovery — the file is named in the URL). Auth: requires repo write.
- `GET /{owner}/{repo}/actions.atom` — returns the last 50 workflow runs as an Atom feed. Auth and visibility match the Actions tab (repo:read). Entries link to `/{owner}/{repo}/actions/runs/{run_index}` and include the workflow name/path, event, branch, short SHA, status, and conclusion.

## Webhook events (S41h)

Actions emits webhook-facing domain events through notif.EmitTx on state transitions:

- `workflow_run`, with `payload.action` set to `queued`, `running`, or `completed` (`completed` may carry `conclusion:"cancelled"`).
- `workflow_job`, with `payload.action` set to `queued`, `running`, `completed`, or `cancelled`.

Payloads are structural snapshots only. They include ids, run index, workflow path/name, head SHA/ref, event kind, status, conclusion, timestamps, job key/name/runner id, needs, timeout, and cancellation state. They deliberately exclude workflow_runs.event_payload, env, permissions, logs, runner JWTs, and secret values. This keeps the webhook surface stable without turning arbitrary workflow input into subscriber-facing data.

## What S41b deliberately doesn't do

- Run jobs. S41c adds runner claim/status APIs; S41d adds the actual `shithubd-runner` execution binary.
- Schedule sweep. Cron-driven triggers split into S41b-2 to keep this PR reviewable; the trigger pipeline accepts schedule events, but no caller produces them yet. S41b-2 adds the sweep + the `robfig/cron/v3` dep + `shithubd-cron.service` wiring.
- External-PR triggers. Conservative collaborator gate above.

## Secrets + variables settings surface (S41c)

S41c wires the previously schema-only workflow_secrets and actions_variables tables into repo/org settings.

Repository routes are gated through policy.ActionRepoSettingsActions (repo:settings:actions, admin role minimum):

- `GET /{owner}/{repo}/settings/secrets/actions`
- `POST /{owner}/{repo}/settings/secrets/actions`
- `POST /{owner}/{repo}/settings/secrets/actions/{name}/delete`
- `GET /{owner}/{repo}/settings/variables/actions`
- `POST /{owner}/{repo}/settings/variables/actions`
- `POST /{owner}/{repo}/settings/variables/actions/{name}/delete`

Organization routes follow the existing org-settings prefix and are owner-only:

- `GET /organizations/{org}/settings/secrets/actions`
- `POST /organizations/{org}/settings/secrets/actions`
- `POST /organizations/{org}/settings/secrets/actions/{name}/delete`
- `GET /organizations/{org}/settings/variables/actions`
- `POST /organizations/{org}/settings/variables/actions`
- `POST /organizations/{org}/settings/variables/actions/{name}/delete`

Secrets are sealed through `internal/auth/secretbox` using the operator-managed `Auth.TOTPKeyB64` root key. Secret list pages render names/metadata only; the plaintext value is accepted once on create or rotation and never rendered back. Variables are non-secret plaintext configuration, so settings pages render their values. Both stores use the same name grammar as the database constraints: `^[A-Za-z_][A-Za-z0-9_]*$`, 1-100 characters. Variables additionally enforce the 4096-character value cap in Go before hitting the DB constraint.

## What S41a deliberately doesn't do

- No trigger pipeline. `domain_events` aren't matched against `on:` yet — that's S41b.
- No runner. S41c/S41d add runner claim APIs and the execution binary.
- No UI. The Actions tab still renders the placeholder — S41f.
- No secret encryption helpers wired to anything writable — S41c.
- No JWT issuance, no runner registration flow — S41c.
- No log streaming, no SSE — S41d/f.
- No execution sandbox, no scrubbing, no injection guards enforced at the runner — S41d/e (the parser-side taint contract is the foundation those depend on, not a substitute).

## Why these choices, in two paragraphs

The schema work is front-loaded so later sub-sprints don't ripple a migration through every PR. version (optimistic locking) and run_index (per-repo monotonic) are the two columns I'd flag to a new maintainer immediately — both are nearly free to add up front and painful to retrofit. The split between hot-path log chunks (Postgres) and finalized blob (Spaces) is shaped after Forgejo's log path; we pick the boring well-trodden answer over the clever one because log throughput is the failure mode that bites first.

The taint contract is the security-load-bearing piece. Every later sub-sprint trusts that the Tainted flag is set correctly here, in the parser/evaluator, and never re-derived downstream. The narrow allowlist of namespaces and functions exists exactly so a future PR that adds, say, fromJSON has to do it knowingly — by widening the allowlist in a visible diff, with a reviewer-required note, rather than by accident. The ${{ github.* }} alias is a pragmatic concession to copy-paste users; the rebrand to ${{ shithub.* }} is the canonical form so future divergence isn't awkward.

## See also

- `internal/actions/workflow/parse.go` — the parser
- `internal/actions/expr/eval.go` — the evaluator
- `internal/migrationsfs/migrations/0042..0049_*.sql` — the schema
- `tests/fixtures/workflows/*.yml` — canonical input shapes
- `internal/actions/workflow/parse_test.go` — fixture-driven tests
- `internal/actions/expr/eval_test.go` — taint-contract tests
- `.refs/forgejo/services/actions/` — reference architecture
- Campaign plan in conversation memory (humble-cooking-bunny)
View source
1 # Actions/CI — schema + workflow dialect (S41a)
2
3 The Actions/CI subsystem is shipping in eight sub-sprints (S41a through
4 S41h, plus optional S41i Nix engine). This doc covers what S41a lays
5 down: the SQL schema, the workflow YAML dialect, the expression
6 evaluator, and the load-bearing taint contract every later sub-sprint
7 depends on.
8
9 S41a is parser + schema only — no triggers, no runner, no UI. The
10 goal is to land a frozen contract that S41b/c/d/e can build against
11 without churning under them.
12
13 ## SQL schema
14
15 Actions migrations currently span 0042–0051, 0053, 0057, and 0060.
16 Migration 0052 belongs to the repo source-remotes feature, 0054
17 belongs to push event protocol tracking, 0055 belongs to the social
18 feed, 0056 belongs to user profile contribution settings, 0058 belongs
19 to repo name reuse, and 0059 belongs to GitHub org imports.
20
21 | # | Table | Purpose |
22 | ----- | --------------------------- | ------------------------------------------------------------- |
23 | 0042 | `workflow_runs` | One row per triggered workflow execution |
24 | 0043 | `workflow_jobs` | Jobs within a run (one row per `jobs.<key>`) |
25 | 0044 | `workflow_steps` | Steps within a job (one row per `steps[i]`) |
26 | 0045 | `workflow_secrets` | Per-repo + per-org encrypted secrets |
27 | 0046 | `workflow_runners` | Registered runners + `runner_tokens` |
28 | 0047 | `workflow_step_log_chunks` | Hot-path append log buffer (concatenated to blob on finalize) |
29 | 0048 | `workflow_artifacts` | Per-run artifact metadata (90-day default expiry) |
30 | 0049 | `actions_variables` | Non-secret per-repo/org config (Forgejo parity) |
31 | 0050 | `workflow_steps.step_with` | Parsed `with:` inputs for magic `uses:` aliases |
32 | 0051 | `workflow_runs.trigger_event_id` | Trigger idempotency for retries/admin replays |
33 | 0053 | `runner_jwt_used` | Single-use replay gate for runner job JWTs |
34 | 0057 | `workflow_job_secret_masks` | Encrypted claim-time log mask snapshots per job |
35 | 0060 | Actions retention indexes | Narrow cleanup indexes for terminal steps/runs |
36
37 A few load-bearing choices, called out so they're easy to spot in a
38 later schema diff:
39
40 - **`workflow_runs.run_index`** — per-repo monotonic counter. Each
41 repo gets `#1`, `#2`, … so URLs like
42 `/{owner}/{repo}/actions/runs/42` are stable and human-friendly.
43 Crib from Forgejo's `actions_run.index`.
44 - **`workflow_runs.version`** — optimistic-lock counter. Mutators
45 bump-and-check rather than `SELECT … FOR UPDATE`. Required for
46 S41g's race between a cancel request and a state transition.
47 - **`workflow_runs.concurrency_group`** — the concurrency-slot key,
48 resolved at trigger time from the workflow's `concurrency.group:`
  expression. S41g's slot manager keys off this column, and the
  runner claim query blocks younger runs while an older same-group
  run still has a queued/running job without `cancel_requested=true`.
52 - **`workflow_runs.parent_run_id`** — for re-runs. The new run
53 references the original; the UI shows a "re-ran from #N" link.
54 - **`workflow_jobs.runner_id`** — FK added in 0046 (after the
55 runners table exists). Nullable until claimed.
56 - **`workflow_steps`** has a CHECK constraint enforcing
57 `(run_command IS NOT NULL) <> (uses_alias IS NOT NULL)` — exactly
58 one of `run:` or `uses:`. The `uses_alias` column is further
59 CHECK-constrained to the three magic aliases we accept in v1.
- **`workflow_secrets`** stores its value as `bytea`, sealed with
  ChaCha20-Poly1305 via `internal/auth/secretbox`. Key derivation uses
  `cfg.Auth.TOTPKeyB64` (already an operator-managed root) plus an
  `(owner, kind, name)` salt, so re-keying is per-row.
64 - **`workflow_step_log_chunks.chunk`** is capped at 512 KB per row.
65 The runner sends bigger payloads in pieces. `(step_id, seq)` is
66 UNIQUE so duplicate sends are idempotent.
67 - **`actions_variables`** — non-secret, plaintext, scoped exactly
68 like secrets (per-repo or per-org, never both on the same row).
69 Forgejo has the same split; we mirror it for parity.
70 - **`runner_jwt_used`** — primary-keyed by JWT `jti`. Job endpoints
  insert into this table during auth; if the insert affects zero rows,
  the JWT was already used (a replay) and the API returns 401. JWTs
  are HMAC-SHA256 and use an HKDF
73 subkey derived from `auth.totp_key_b64` with label
74 `actions-runner-jwt-v1`.
75 - **`workflow_job_secret_masks`** — one encrypted JSON array of exact
76 secret values per claimed job. It snapshots the log scrub set at
77 claim time, preventing a rotated or deleted secret from disappearing
78 from server-side masking while the old value is still in a runner's
79 job payload.
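The `version` bump-and-check described above can be sketched as follows. This is a minimal in-memory model with illustrative names, not the real store code; the actual mutators express the same check as a single SQL `UPDATE ... WHERE id=$1 AND version=$2` and treat zero affected rows as a stale read.

```go
package main

import (
	"errors"
	"fmt"
)

// run models the two load-bearing workflow_runs columns; field names
// are illustrative, not the real schema types.
type run struct {
	Status  string
	Version int64
}

var errStale = errors.New("stale version: reload and retry")

// transition is the bump-and-check pattern: the caller presents the
// version it read, and the mutation applies only if nobody else
// bumped it first. No SELECT ... FOR UPDATE needed.
func transition(r *run, expectVersion int64, next string) error {
	if r.Version != expectVersion {
		return errStale
	}
	r.Status = next
	r.Version++
	return nil
}

func main() {
	r := &run{Status: "queued", Version: 1}
	v := r.Version // snapshot read
	fmt.Println(transition(r, v, "running"), r.Status, r.Version)
	// A second writer still holding the old snapshot loses the race:
	fmt.Println(transition(r, v, "cancelled"))
}
```

This is exactly the shape S41g needs for cancel-vs-transition races: the loser of the race gets `errStale`, reloads, and decides whether its mutation still applies.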
80
81 The `version` and `run_index` patterns are the two pieces I'd point
82 out to a future maintainer first. Both are cheap to add now and
83 miserable to retrofit later.
84
85 ## Workflow YAML dialect (v1)
86
87 We accept a strict subset of GitHub Actions YAML. The parser rejects
88 unknown keys at parse time so workflow authors find their typos
89 immediately instead of shipping a workflow that does nothing.
90
91 ### Top level
92
93 ```yaml
94 name: my-pipeline # optional human name
95 on: [push, pull_request] # or full-form (see below)
96 permissions: read-all # default if omitted
97 env: { GREETING: "hello" } # workflow-level env
98 concurrency: # optional slot manager
99 group: ${{ shithub.ref }}
100 cancel-in-progress: true
101 jobs:
102 <key>: # 1+ entries
103 runs-on: ubuntu-latest
104 needs: [other-key] # optional dep edge
105 if: ${{ shithub.actor == 'alice' }} # optional gate
106 timeout-minutes: 60 # 1..4320, default 360
107 permissions: { contents: read } # narrow workflow perms
108 env: { K: v } # job overlay
109 steps:
110 - name: ...
111 id: ...
112 if: ...
113 run: echo hi # run XOR uses
114 uses: actions/checkout@v4 # exactly one of three aliases
115 working-directory: ...
116 env: { ... }
117 continue-on-error: false
118 ```
119
120 ### Triggers (`on:`)
121
122 v1 supports four triggers — anything else is a parse error.
123
124 | Trigger | Surface |
125 | ------------------- | ---------------------------------------------------------------- |
126 | `push` | `branches:`, `tags:`, `paths:` (include + `!exclude` semantics) |
127 | `pull_request` | `types:` (opened/synchronize/reopened/...), `branches:`, `paths:` |
128 | `schedule` | one or more `- cron: <5-field-expr>` |
129 | `workflow_dispatch` | `inputs:` map (string/boolean/choice/environment) |
130
131 ### `uses:` allowlist
132
133 Exactly three aliases are reserved at parse time, no exceptions:
134
135 | Alias | Parser status | Runner status |
136 | -------------------------------- | ------------- | ------------------------------------------ |
137 | `actions/checkout@v4` | accepted | rejected until checkout support lands |
138 | `shithub/upload-artifact@v1` | accepted | rejected until artifact upload lands |
139 | `shithub/download-artifact@v1` | accepted | rejected until artifact download lands |
140
141 Any other `uses:` value (community actions, Docker images, composite
142 actions) is an Error-severity diagnostic. The marketplace problem is
143 explicitly out of scope for v1; revisit only if a real demand exists
144 and we have an answer for supply-chain trust.
145
146 The current Docker executor runs `run:` steps only. It fails a reserved
147 `uses:` alias deliberately instead of pretending checkout/artifact
148 semantics exist. This keeps the first end-to-end smoke path honest:
149 `run:`-only workflows are executable now, while repository checkout and
150 artifact transfer remain explicit follow-up work.
151
152 ### File-size + parser caps
153
154 - **64 KB** workflow file size cap (`workflow.MaxWorkflowFileBytes`).
155 Files larger than this are rejected before YAML decode begins —
156 defends against pathological inputs and gives operators a
157 predictable upper bound on parser memory.
- **100 aliases** per document (`workflow.MaxYAMLAliases`) — the
  billion-laughs guard. yaml.v3 doesn't expose a direct knob, so we
  count alias nodes during a tree walk and bail.
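The alias-counting walk has roughly this shape. The sketch uses a stand-in node type so it stays self-contained; in the real guard the tree is yaml.v3's `*yaml.Node` and "is an alias" corresponds to `Kind == yaml.AliasNode`.

```go
package main

import (
	"errors"
	"fmt"
)

// node is a stand-in for yaml.v3's *yaml.Node; only the fields the
// guard needs are modeled here.
type node struct {
	alias    bool
	children []*node
}

const maxAliases = 100 // mirrors workflow.MaxYAMLAliases

var errTooManyAliases = errors.New("workflow: too many YAML aliases")

// countAliases walks the decoded tree and bails as soon as the alias
// budget is exhausted — before any alias expansion can amplify.
func countAliases(n *node, seen int) (int, error) {
	if n == nil {
		return seen, nil
	}
	if n.alias {
		seen++
		if seen > maxAliases {
			return seen, errTooManyAliases
		}
	}
	for _, c := range n.children {
		var err error
		if seen, err = countAliases(c, seen); err != nil {
			return seen, err
		}
	}
	return seen, nil
}

func main() {
	doc := &node{children: []*node{{alias: true}, {alias: true}}}
	fmt.Println(countAliases(doc, 0))
}
```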
161
162 ### `${{ github.* }}` alias
163
164 The dialect is intentionally rebranded to `${{ shithub.* }}`.
Authors who paste in GHA workflows unmodified will see their
166 `${{ github.* }}` references continue to work because the evaluator
167 rewrites `path[0]` from `github` to `shithub` at the top of `evalRef`
168 before taint computation, dispatch, and error rendering.
169
170 The alias is intentionally **scope-narrow**: only fields that exist
171 in our `shithub.*` namespace (`run_id`, `sha`, `ref`, `actor`,
172 `event`) route through. GHA fields we don't expose in v1 —
173 `event_name`, `repository`, `run_number`, `workspace`, etc. — error
174 with the canonical `unknown shithub field "X"` message. Slightly
175 confusing for a GHA-flavored author but keeps the v1 namespace
176 surface tight.
177
178 The alias preserves the load-bearing taint flag: `github.event.X`
179 taints exactly like `shithub.event.X`. `TestEval_GithubAliasIsTainted`
180 pins this contract.
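The rewrite itself is a one-liner at the top of the reference path. A minimal sketch, with an illustrative function name (the real code sits inside `evalRef` in `internal/actions/expr`):

```go
package main

import "fmt"

// rewriteAlias mirrors the evalRef entry point: path[0] "github" is
// rewritten to "shithub" before any lookup happens, so taint
// computation, dispatch, and error rendering all see the canonical
// namespace and no downstream code needs to know the alias exists.
func rewriteAlias(path []string) []string {
	if len(path) > 0 && path[0] == "github" {
		return append([]string{"shithub"}, path[1:]...)
	}
	return path
}

func main() {
	fmt.Println(rewriteAlias([]string{"github", "event", "pull_request", "title"}))
	fmt.Println(rewriteAlias([]string{"env", "HOME"}))
}
```

Because the rewrite happens before taint computation, `github.event.*` inherits the always-tainted rule for free.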
181
182 Migration to strict-compat (drop the alias entirely) later is a
183 one-PR flip; moving the other direction is much harder.
184
185 This is a deliberate decision recorded in the campaign plan.
186
187 ## Expression evaluator
188
189 `${{ … }}` expressions are parsed into a tiny AST and evaluated by
190 `internal/actions/expr`. The surface is intentionally minimal:
191
192 ### Allowed namespaces
193
194 | Namespace | Source | Tainted? |
195 | ---------------- | ----------------- | --------------------------- |
196 | `secrets.X` | workflow_secrets | no, but sensitive |
197 | `vars.X` | actions_variables | no (operator-controlled) |
198 | `env.X` | workflow file | no (workflow author's text) |
199 | `shithub.run_id` | dispatch context | no |
200 | `shithub.sha` | dispatch context | no |
201 | `shithub.ref` | dispatch context | no |
202 | `shithub.actor` | dispatch context | no (resolved username) |
203 | `shithub.event.*`| trigger payload | **yes — always** |
204
205 `runner.*`, `steps.*`, `needs.*`, `matrix.*`, `inputs.*` are all
parse-time errors. They're parked for v2, and the parser's
allowlist-closed posture means a future PR can't widen this
accidentally: any widening shows up as a clearly visible diff.
209
210 ### Allowed functions
211
212 `contains(haystack, needle)`, `startsWith(s, prefix)`,
213 `endsWith(s, suffix)`, plus the four job-status predicates
214 `success()`, `failure()`, `cancelled()`, `always()`. That's the
215 whole list. `fromJSON`, `hashFiles`, `toJSON`, `format`, and
216 friends are explicitly rejected — they each carry footgun risk
217 (parser DoS, FS access, side-channel injection) that we don't want
218 to take on in v1.
219
220 ### Missing-value semantics
221
222 | Reference | Missing → ? |
223 | -------------------------------- | ------------------------------------ |
224 | `secrets.NOT_BOUND` | error (loud — workflow won't run) |
225 | `vars.MISSING` | empty string (GHA parity) |
226 | `env.MISSING` | empty string (GHA parity) |
227 | `shithub.event.deeply.missing` | null **but still tainted** |
228
The "missing event path → null but tainted" case is a
defence-in-depth choice: even if the path doesn't resolve, the result
still came from the event payload, and we'd rather over-flag than
under-flag.
232
233 ## Taint contract — the load-bearing piece
234
235 This is the contract every later sub-sprint hangs off. Get it wrong
236 and we have an injection-shaped hole in the runner.
237
238 ### Where the flag lives
239
240 The taint flag lives on `expr.Value` (the evaluator-produced value),
241 not `workflow.Value` (the parser-produced value). Two different
242 structs share the name `Value` because they live in different
243 packages, but they have different jobs:
244
245 - **`workflow.Value`** carries the raw source string the parser read
246 out of the YAML (an env entry, a `with:` input, a concurrency
247 group expression). At parse time we don't know what the
248 `${{ … }}` body will resolve to, so there's nothing to taint yet.
249 - **`expr.Value`** is what the evaluator returns when it resolves a
250 reference at runtime. *This* struct carries `Tainted bool`. The
251 runner's exec layer (S41d) consumes that flag.
252
253 Pre-L5 the parser-side struct also had a `Tainted bool` field plus a
254 `Tainted()` constructor — both unused, both confusing because they
255 suggested two sources of truth. Dropped in S41a-L5 cleanup.
256
257 ### Propagation
258
259 **Every `expr.Value` carries a `Tainted bool`.** Set true iff the
260 value transitively depends on `shithub.event.*`. Operators control
261 secrets, vars, env, the rest of `shithub.*`. Authors control the
262 workflow file. Only the event payload is *attacker-controlled*: a
263 PR title, a commit message, a branch name from a fork. Those values
264 must never be interpolated into a shell string.
265
266 Propagation rules:
267
- Reading `shithub.event.X``Tainted: true` (always, including
  missing-path null results).
- Reading `secrets.X``Sensitive: true`. Secrets are
  operator-controlled, so they are not tainted, but they must not
  appear in shell source strings or Docker argv.
273 - Reading any other namespace → `Tainted: false` and
274 `Sensitive: false`, except `env.X` preserves both flags of the
275 resolved env value. This closes the escape where an event-derived or
276 secret-derived value is first assigned to env and then interpolated
277 through `${{ env.X }}`.
278 - Binary op (`==`, `!=`, `&&`, `||`) → tainted or sensitive if either
279 operand is.
280 - Unary op (`!`) → tainted/sensitive iff its operand is.
281 - Function call (`contains`, `startsWith`, `endsWith`) → tainted or
282 sensitive if any argument is.
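The propagation rules for operators and function calls all reduce to the same OR-fold. A minimal sketch, with illustrative names; the real types live in `internal/actions/expr`:

```go
package main

import "fmt"

// value mirrors expr.Value's two flags; the concrete payload is elided.
type value struct {
	S         string
	Tainted   bool
	Sensitive bool
}

// combine is the shared propagation rule for binary ops, unary ops,
// and the three string functions: the result is tainted or sensitive
// if any input is. Neither flag is ever cleared by an operation.
func combine(s string, args ...value) value {
	out := value{S: s}
	for _, a := range args {
		out.Tainted = out.Tainted || a.Tainted
		out.Sensitive = out.Sensitive || a.Sensitive
	}
	return out
}

func main() {
	title := value{S: "pr title", Tainted: true} // shithub.event.*
	tok := value{S: "hunter2", Sensitive: true}  // secrets.*
	ref := value{S: "refs/heads/main"}           // shithub.ref
	fmt.Println(combine("contains(ref, title)", title, ref).Tainted)
	fmt.Println(combine("tok == ref", tok, ref).Sensitive)
}
```

The important property is monotonicity: once a flag is set anywhere in a subexpression, no combination of operators can launder it away.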
283
284 The runner consumes `Tainted` and `Sensitive` and refuses to interpolate
285 either class into shell strings. Instead, those values are bound to
286 runner-owned `SHITHUB_INPUT_xx` envvars and the shell source only
287 references those placeholders. The author writes:
288
289 ```yaml
290 - run: echo "PR title was: ${{ shithub.event.pull_request.title }}"
291 ```
292
293 The runner sees a tainted reference; it compiles the step to:
294
295 ```bash
296 SHITHUB_INPUT_0="$user_pr_title" exec sh -c 'echo "PR title was: $SHITHUB_INPUT_0"'
297 ```
298
299 …where `$user_pr_title` is set via Go's `cmd.Env`, never inserted into
300 the shell source string or Docker CLI argv. Backticks, `$()`, `;`,
301 `&&` — none of those work as command-injection vectors when the value
302 reaches the shell as environment data instead of syntax.
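The placeholder compilation can be sketched like this. Names and the `piece` representation are illustrative; the real renderer lives in `internal/runner/exec` and also handles quoting of the surrounding literal text.

```go
package main

import "fmt"

// piece is either literal workflow text or a resolved ${{ … }} value.
type piece struct {
	text               string
	tainted, sensitive bool
}

// render mirrors the injection boundary: literal text is copied into
// the shell source verbatim, while tainted or sensitive values become
// SHITHUB_INPUT_n placeholders. The real values travel only in the
// env slice (bound via Go's cmd.Env), never in the shell source.
func render(pieces []piece) (src string, env []string) {
	n := 0
	for _, p := range pieces {
		if p.tainted || p.sensitive {
			name := fmt.Sprintf("SHITHUB_INPUT_%d", n)
			n++
			env = append(env, name+"="+p.text)
			src += `"$` + name + `"` // quoted expansion: data, not syntax
			continue
		}
		src += p.text
	}
	return src, env
}

func main() {
	src, env := render([]piece{
		{text: "echo "},
		{text: "$(rm -rf /)", tainted: true},
	})
	fmt.Println(src) // the payload never appears in the shell source
	fmt.Println(env)
}
```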
303
304 The shared renderer lives in `internal/runner/exec`, so future engines
305 consume the same injection boundary instead of reimplementing it. The
306 runner claim payload includes `workflow_runs.event_payload`; without
307 that field, the runner cannot evaluate and taint
308 `${{ shithub.event.* }}` references.
309
310 Tests for this contract live in `internal/actions/expr/eval_test.go`,
311 `internal/runner/exec/render_test.go`, and
312 `internal/runner/engine/docker_test.go`. **Do not** weaken them in a
313 later PR without an audit-checkpoint review — they're explicitly
314 load-bearing for S41e's threat model.
315
316 Runner log chunks pass through `internal/runner/scrub` before they are
317 posted to the API. It masks exact secret values and preserves enough
318 tail bytes between chunks to catch a secret split across chunk
319 boundaries. S41e wires resolved workflow secrets into the runner claim
320 payload and mask set, snapshots that mask set encrypted on the job, then
321 applies the same exact-value scrub again in the runner API before
322 persisting chunks. The server path also carries a possible secret-prefix
323 tail from the prior persisted chunk, so a runner that bypasses
324 client-side scrubbing cannot leak a secret by splitting it across
325 adjacent log POSTs.
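The cross-chunk tail trick is the interesting part of the scrubber, so here is a minimal sketch of it for a single secret. The real `internal/runner/scrub` handles a set of secrets and byte-level chunks; this model keeps `len(secret)-1` trailing bytes buffered so a secret split across two POSTs is still replaced.

```go
package main

import (
	"fmt"
	"strings"
)

// scrubber masks exact secret values in a chunked stream. Sketch
// only: one secret, string chunks.
type scrubber struct {
	secret string
	tail   string // unemitted bytes that might begin a secret
}

// scrub masks the buffered tail plus the new chunk, then holds back
// the last len(secret)-1 bytes in case the next chunk completes a
// split secret.
func (s *scrubber) scrub(chunk string) string {
	buf := strings.ReplaceAll(s.tail+chunk, s.secret, "***")
	keep := len(s.secret) - 1
	if keep > len(buf) {
		keep = len(buf)
	}
	s.tail = buf[len(buf)-keep:]
	return buf[:len(buf)-keep]
}

// flush emits whatever tail is still buffered at end of stream.
func (s *scrubber) flush() string { return s.tail }

func main() {
	sc := &scrubber{secret: "hunter2"}
	out := sc.scrub("token is hun") + sc.scrub("ter2 ok") + sc.flush()
	fmt.Println(out) // hunter2 was split across chunks, masked anyway
}
```

Because at most `len(secret)-1` bytes of a not-yet-complete secret can exist at any point, and all of them stay in the tail buffer, no prefix of a secret is ever emitted before the full value has had a chance to be matched.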
326
327 ## `shithub.event` payload schema (v1)
328
329 The event payload is the most user-facing part of the contract: once
330 authors write workflows that template against `shithub.event.X`,
331 schema changes are breaking. The v1 schema is pinned and labelled
332 `v1`. Any addition is fine; renames and removals require a major
333 bump.
334
335 The schema is enforced by **typed constructors** in the
336 `internal/actions/event` package — one per trigger. S41b's pipeline
337 calls these to build payloads; the function signatures pin the
338 field set so adding a key requires editing the constructor in a
339 visible diff. This is the same closed-door discipline as the
340 expression evaluator's namespace allowlist.
341
342 | Trigger | Constructor | Top-level keys |
343 | ------------------- | ----------------------- | --------------------------------------------------------------------------------- |
344 | `push` | `event.Push` | `ref`, `before`, `after`, `head_commit{message,id,author}` |
345 | `pull_request` | `event.PullRequest` | `action`, `number`, `pull_request{title,head{ref,sha},base{ref,sha},user{login}}` |
346 | `schedule` | `event.Schedule` | (empty map — cron fired; cron expression is on the `workflow_runs` row) |
347 | `workflow_dispatch` | `event.WorkflowDispatch`| `inputs{<name>: <stringified>}` |
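The closed-door property comes from the constructor signatures themselves. A sketch of the `push` shape, with an illustrative function name and signature (the real constructors live in `internal/actions/event` and may differ):

```go
package main

import "fmt"

// pushPayload models the event.Push constructor: every field is a
// named parameter, so adding a key forces a signature change that
// shows up in the diff of every caller. Names here are illustrative.
func pushPayload(ref, before, after, msg, id, author string) map[string]any {
	return map[string]any{
		"ref":    ref,
		"before": before,
		"after":  after,
		"head_commit": map[string]any{
			"message": msg,
			"id":      id,
			"author":  author,
		},
	}
}

func main() {
	p := pushPayload("refs/heads/main", "aaa", "bbb", "fix: typo", "bbb", "alice")
	fmt.Println(p["ref"], p["head_commit"].(map[string]any)["author"])
}
```

Contrast with accepting a free-form `map[string]any` at the call site, where a new key could slip in with no reviewable surface.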
348
349 Anything not in this table doesn't exist in v1. Accessing it returns
350 null+tainted (the missing-path semantics above).
351
352 **Adding a field**: edit the constructor in `internal/actions/event/`,
353 add a row to this doc, and update the corresponding `*_FlowsThroughEvaluator`
354 test in `event_test.go` so the new path is exercised end-to-end.
355 Reviewer-required note in the commit message — same standard as a
356 new evaluator function.
357
358 **Renaming or removing**: that's a v1→v2 break. Don't.
359
360 ## Operator surface
361
362 `shithubd admin actions parse <file>` reads a workflow off disk,
363 runs the parser, and dumps diagnostics + a canonical JSON rendering
364 of the parsed AST. Useful for:
365
366 - debugging "why is my workflow not picking up changes" reports
367 - validating a workflow file before committing it
368 - producing a stable AST snapshot for inclusion in bug reports
369
370 Exit codes:
371
372 | Code | Meaning |
373 | ---- | --------------------------------------------- |
374 | 0 | clean parse, no Error-severity diagnostics |
375 | 1 | file unreadable, oversized, or YAML malformed |
376 | 2 | parse produced Error-severity diagnostics |
377
378 Other admin surfaces are scoped to later sub-sprints:
379
380 - S41c: `shithubd admin runner register --name <foo>` issues a
381 registration token + writes a row to `workflow_runners`.
382 - S41g: `POST /api/v1/jobs/{id}/cancel` and the repository run-detail
383 UI request cancellation. Running jobs flip `cancel_requested`; queued
384 jobs are made terminal immediately.
385 - S41g: `POST /api/v1/runs/{id}/rerun` and the repository run-detail
386 UI re-run completed/cancelled runs. Re-runs read the workflow YAML
387 from the original run's `head_sha`, create a fresh queued
388 `workflow_runs` row, and set `parent_run_id` to the source run.
389 - S41g: workflow-level `concurrency.group` is resolved at enqueue time
390 against the trigger context (`shithub.ref`, `shithub.sha`, and
391 `shithub.event.*`). With `cancel-in-progress: true`, enqueue requests
392 cancellation for older active runs in the same group. Without it,
393 runner claim leaves the younger run queued until the older run no
394 longer has uncancelled queued/running jobs.
395 - S41g: `workflow:cleanup` is a daily retention worker enqueued by
396 `shithubd-cron.service`. Operators can run it manually with
397 `shithubd admin run-job workflow:cleanup`.
398
399 ## Workflow concurrency (S41g)
400
401 `concurrency.group` is a workflow-level slot key. The parser stores the
402 raw value, and `internal/actions/concurrency` evaluates `${{ ... }}`
403 fragments when the run is enqueued. The trigger-time context deliberately
404 does not include secrets; event-derived values may be tainted but are
405 safe here because the value is only used as a database key.
406
407 When a run enters a non-empty group:
408
409 - `cancel-in-progress: false` leaves the new run queued behind older
410 same-repo, same-group runs while those older runs still have
411 queued/running jobs with `cancel_requested=false`.
412 - `cancel-in-progress: true` requests cancellation on those older jobs.
413 Queued jobs become terminal immediately; running jobs keep running
414 with `cancel_requested=true` so the runner can kill the active
415 container. Once every active older job is cancel-requested, the group
416 is released for the newer run.
417
418 The runner claim query enforces the queueing rule, not the web handler
419 or UI. This keeps heartbeat races honest: multiple runners can poll at
420 the same time, but only jobs whose dependency and concurrency blockers
421 are clear can be claimed.
422
423 ## Runner timeouts (S41g)
424
425 `jobs.<key>.timeout-minutes` is enforced by `shithubd-runner` as a
426 whole-job deadline. The parser stores the value in
427 `workflow_jobs.timeout_minutes` with the GitHub-compatible default of
428 360 minutes and a 1..4320 cap.
429
430 When the deadline expires, the Docker engine explicitly kills the
431 active step container, emits a terminal step update with
432 `status=completed` and `conclusion=timed_out`, and the runner reports
433 the job itself as `completed/timed_out`. The server rolls the parent
434 workflow run up to `timed_out` when all jobs are terminal. A timed-out
435 step is not masked by `continue-on-error`; the job deadline always wins.
436
437 The runner API increments `shithub_actions_step_timeouts_total` the
438 first time a step reaches `conclusion=timed_out`. Duplicate terminal
439 step-status retries do not increment the counter again.
440
441 ## Retention cleanup (S41g)
442
443 `workflow:cleanup` applies the durable Actions retention contract in
444 this order:
445
446 1. Delete hot `workflow_step_log_chunks` for steps completed more than
447 7 days ago. Finalized logs already live in object storage.
448 2. Delete expired `workflow_artifacts` rows after deleting their
449 `actions/runs/...` blob objects. The row's `expires_at` value is
450 authoritative so per-upload retention overrides keep working.
451 3. Delete unpinned terminal `workflow_runs` older than 365 days. Child
452 jobs, steps, artifacts, and consumed JWT rows cascade through FK
453 ownership.
454 4. Delete consumed `runner_jwt_used` rows whose JWT expiry is more than
455 30 days old. This preserves replay/audit evidence for recent jobs
456 without letting the replay table grow forever.
457
458 The defaults can be overridden in the worker payload:
459
460 ```json
461 {"step_log_chunk_days":7,"run_days":365,"jwt_used_days":30,"artifact_batch":1000}
462 ```
463
464 `artifact_batch` caps each object-delete page and may not exceed 10000.
465 Negative values are poison-job errors. The worker exports
466 `shithub_actions_runs_pruned_total{kind}` where `kind` is one of
467 `chunks`, `blobs`, `runs`, or `jwt_used`.
468
469 Production object storage also needs provider-side lifecycle on the
470 same prefix: `deploy/spaces/actions-lifecycle.json` expires
471 `actions/runs/` objects after 90 days and aborts stale multipart
472 uploads after 2 days. Apply it with
473 `deploy/cutover/apply-actions-lifecycle.sh`.
474
475 ## Trigger pipeline (S41b)
476
477 Three layers between a triggering event and a queued `workflow_run`:
478
479 ```
480 caller (push_process / pulls.Create / pr_jobs.PRSynchronize / dispatch HTTP)
481
482 └─► worker.Enqueue(KindWorkflowTrigger, JobPayload)
483
484 └─► trigger.Handler picks up:
485 Discover .shithub/workflows/*.yml at HEAD SHA
486 Parse each (skip + log on Error diagnostics)
487 Match each against trigger.Event
488 Enqueue each match
489
490 └─► trigger.Enqueue (one tx):
491 INSERT workflow_runs (ON CONFLICT DO NOTHING)
492 INSERT workflow_jobs per parsed job
493 INSERT workflow_steps per parsed step
494 (commit)
495 checks.Create per job (post-tx, idempotent
496 via ExternalID 'workflow_run:<id>:job:<key>')
497 ```
498
499 ### Idempotency on the triggering event
500
Idempotency keys off the triggering event's identity, not a UNIQUE
on `(repo_id, head_sha)`. Each caller constructs a stable
`trigger_event_id` from its event:
504
505 | Caller | trigger_event_id format |
506 | ------------------- | ------------------------------------------------ |
507 | push_process | `push:<push_event_id>` |
508 | pulls.Create | `pr_opened:<pr_id>:<head_sha>` |
509 | pr_jobs.PRSynchronize | `pr_synchronize:<pr_id>:<head_sha>` |
510 | dispatch HTTP | `dispatch:<file>:<sha>:<8-byte-random-hex>` |
511 | schedule sweep (S41b-2) | `schedule:<workflow_id>:<window_start_unix>` |
512
513 Migration 0051 adds `workflow_runs.trigger_event_id` (text NOT NULL
514 DEFAULT '') with a partial UNIQUE on
515 `(repo_id, workflow_file, trigger_event_id) WHERE trigger_event_id <> ''`.
516 The trigger handler does `INSERT … ON CONFLICT DO NOTHING` so:
517
518 - Worker retries (the same push_process replay) → no duplicate runs.
519 - Admin replays via `shithubd admin run-job workflow:trigger ...`
520 → no duplicate runs.
521 - Re-runs explicitly construct a NEW
522 trigger_event_id (`rerun:<original_run_id>:<request_uuid>`) and
523 chain back via `parent_run_id`. History is preserved, no
524 collision.
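The ID formats in the table reduce to straightforward string builders. Sketch only, with illustrative helper names; the real callers format these inline:

```go
package main

import "fmt"

// Illustrative constructors for the trigger_event_id formats above.
// Each caller owns a distinct prefix, so the namespaces can never
// collide with one another.
func pushTriggerID(pushEventID int64) string {
	return fmt.Sprintf("push:%d", pushEventID)
}

func prSyncTriggerID(prID int64, headSHA string) string {
	return fmt.Sprintf("pr_synchronize:%d:%s", prID, headSHA)
}

func scheduleTriggerID(workflowID, windowStartUnix int64) string {
	return fmt.Sprintf("schedule:%d:%d", workflowID, windowStartUnix)
}

func main() {
	fmt.Println(pushTriggerID(42))
	fmt.Println(prSyncTriggerID(7, "deadbeef"))
	fmt.Println(scheduleTriggerID(3, 1700000000))
}
```

Note what makes each format idempotent: the schedule ID uses the window start rather than "now", so a delayed or retried sweep still produces the same key for the same cron window.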
525
Each caller's namespace is collision-free and human-debuggable: a
Postgres operator can grep `workflow_runs.trigger_event_id` to see
exactly which triggering event produced a given run.
530
531 ### Filter evaluation
532
533 `trigger.Match(workflow, event)` is a pure function (no I/O, no DB).
534 For each event kind:
535
536 - **push**: branch vs tag classified from the ref; only the matching
537 filter list applies (a `branches:` filter rejects tag pushes and
538 vice versa). `paths:` (when set) requires at least one changed
539 path to match. Empty filter = match-all.
540 - **pull_request**: `types:` defaults to
541 `[opened, synchronize, reopened]` when omitted (GHA parity).
542 `branches:` applies to the **base** ref. `paths:` as for push.
543 - **schedule**: requires the workflow to declare the cron expression
544 that fired. The sweep is the source of truth for which cron
545 fires; we just gate on declaration. Avoids interpreting cron
546 semantics in two places.
547 - **workflow_dispatch**: matches whenever the workflow declares
548 `on.workflow_dispatch`.
549
550 Glob semantics in `branches:`/`tags:`/`paths:`: minimatch subset
551 with `*` (single segment), `**` (any), `/**` end-anchor (optional
552 trailing path), `**/` start-anchor, and `!exclude` (last-match-wins,
553 exclusion-only list implies include-all).
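The last-match-wins and exclusion-only rules are the subtle part, so here is a sketch of just that layer. To stay short, `matchOne` supports only exact strings and a trailing `/**` prefix rule; the real matcher implements the full minimatch subset described above.

```go
package main

import (
	"fmt"
	"strings"
)

// filterMatch implements the include/exclude semantics: patterns are
// evaluated in order, the last matching pattern wins, and a list that
// contains only `!` exclusions implies include-all.
func filterMatch(patterns []string, s string) bool {
	state := 0 // 0 = no pattern matched, 1 = included, -1 = excluded
	sawInclude := false
	for _, p := range patterns {
		neg := strings.HasPrefix(p, "!")
		pat := strings.TrimPrefix(p, "!")
		if !neg {
			sawInclude = true
		}
		if matchOne(pat, s) {
			if neg {
				state = -1
			} else {
				state = 1
			}
		}
	}
	if state == 0 && !sawInclude {
		return true // exclusion-only list: everything else is included
	}
	return state == 1
}

// matchOne is a deliberately tiny stand-in for the glob matcher.
func matchOne(pattern, s string) bool {
	if base, ok := strings.CutSuffix(pattern, "/**"); ok {
		return s == base || strings.HasPrefix(s, base+"/")
	}
	return pattern == s
}

func main() {
	fmt.Println(filterMatch([]string{"docs/**", "!docs/internal/notes.md"}, "docs/a.md"))
	fmt.Println(filterMatch([]string{"!vendor/**"}, "main.go"))
}
```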
554
555 ### Collaborator gate
556
557 Per the S41b spec's "external-PR support is parked" decision: PR
558 triggers (both `opened` and `synchronize`) only fire when the PR's
559 author is the repo's owning user. Conservative — drops legitimate
560 non-owner collaborators in the org-repo case. Expanding the gate
561 requires plumbing `policy.Can` into the worker context, which we
562 defer to S41g where the lifecycle work touches that surface anyway.
563
564 ### Operator surface
565
566 - `POST /{owner}/{repo}/actions/workflows/{file}/dispatches`
567 Body: `{"ref": "...", "inputs": {"key": "value"}}` (both optional;
568 ref defaults to the repo's default branch). Returns 204 No Content
569 on success. Synchronous trigger.Enqueue (no discovery — file is
570 named in the URL). Auth: requires repo write.
571 - `GET /{owner}/{repo}/actions.atom`
572 Returns the last 50 workflow runs as an Atom feed. Auth and visibility
573 match the Actions tab (`repo:read`). Entries link to
574 `/{owner}/{repo}/actions/runs/{run_index}` and include the workflow
575 name/path, event, branch, short SHA, status, and conclusion.
576
577 ### Webhook events (S41h)
578
579 Actions emits webhook-facing domain events through `notif.EmitTx` on
580 state transitions:
581
582 - `workflow_run`, with `payload.action` set to `queued`, `running`, or
583 `completed` (`completed` may carry `conclusion:"cancelled"`).
584 - `workflow_job`, with `payload.action` set to `queued`, `running`,
585 `completed`, or `cancelled`.
586
587 Payloads are structural snapshots only. They include ids, run index,
588 workflow path/name, head SHA/ref, event kind, status, conclusion,
589 timestamps, job key/name/runner id, needs, timeout, and cancellation
590 state. They deliberately exclude `workflow_runs.event_payload`, env,
591 permissions, logs, runner JWTs, and secret values. This keeps the
592 webhook surface stable without turning arbitrary workflow input into
593 subscriber-facing data.
594
595 ### What S41b deliberately doesn't do
596
597 - Run jobs. S41c adds runner claim/status APIs; S41d adds the actual
598 `shithubd-runner` execution binary.
599 - Schedule sweep. Cron-driven triggers split into S41b-2 to keep
600 this PR reviewable; the trigger pipeline accepts schedule events,
601 but no caller produces them yet. S41b-2 adds the sweep + the
602 `robfig/cron/v3` dep + `shithubd-cron.service` wiring.
603 - External-PR triggers. Conservative collaborator gate above.
604
605 ## Secrets + variables settings surface (S41c)
606
607 S41c wires the previously schema-only `workflow_secrets` and
608 `actions_variables` tables into repo/org settings.
609
610 Repository routes are gated through
611 `policy.ActionRepoSettingsActions` (`repo:settings:actions`, admin
612 role minimum):
613
614 - `GET /{owner}/{repo}/settings/secrets/actions`
615 - `POST /{owner}/{repo}/settings/secrets/actions`
616 - `POST /{owner}/{repo}/settings/secrets/actions/{name}/delete`
617 - `GET /{owner}/{repo}/settings/variables/actions`
618 - `POST /{owner}/{repo}/settings/variables/actions`
619 - `POST /{owner}/{repo}/settings/variables/actions/{name}/delete`
620
621 Organization routes follow the existing org-settings prefix and are
622 owner-only:
623
624 - `GET /organizations/{org}/settings/secrets/actions`
625 - `POST /organizations/{org}/settings/secrets/actions`
626 - `POST /organizations/{org}/settings/secrets/actions/{name}/delete`
627 - `GET /organizations/{org}/settings/variables/actions`
628 - `POST /organizations/{org}/settings/variables/actions`
629 - `POST /organizations/{org}/settings/variables/actions/{name}/delete`
630
631 Secrets are sealed through `internal/auth/secretbox` using the
632 operator-managed `Auth.TOTPKeyB64` root key. Secret list pages render
633 names/metadata only; the plaintext value is accepted once on create or
634 rotation and never rendered back. Variables are non-secret plaintext
635 configuration, so settings pages render their values. Both stores use
636 the same name grammar as the database constraints:
637 `^[A-Za-z_][A-Za-z0-9_]*$`, 1-100 characters. Variables additionally
638 enforce the 4096-character value cap in Go before hitting the DB
639 constraint.
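The Go-side validation mirrors the constraints above. A minimal sketch, assuming illustrative helper names (`validName`, `validVariable` are not the real function names):

```go
package main

import (
	"fmt"
	"regexp"
)

// nameRe mirrors the database name constraint; length caps are
// enforced in Go before the row ever reaches Postgres.
var nameRe = regexp.MustCompile(`^[A-Za-z_][A-Za-z0-9_]*$`)

const (
	maxNameLen     = 100
	maxVarValueLen = 4096 // variables only; secrets are sealed, not capped here
)

func validName(name string) bool {
	return len(name) >= 1 && len(name) <= maxNameLen && nameRe.MatchString(name)
}

func validVariable(name, value string) bool {
	return validName(name) && len(value) <= maxVarValueLen
}

func main() {
	fmt.Println(validName("DEPLOY_TOKEN"), validName("9bad"), validName(""))
	fmt.Println(validVariable("REGION", "ams3"))
}
```

Doing the check in Go first gives the settings page a friendly validation error instead of surfacing a raw constraint violation from the database.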
640
641 ## What S41a deliberately doesn't do
642
643 - No trigger pipeline. `domain_events` aren't matched against `on:`
644 yet — that's S41b.
645 - No runner. S41c/S41d add runner claim APIs and the execution binary.
646 - No UI. The Actions tab still renders the placeholder — S41f.
647 - No secret encryption helpers wired to anything writable — S41c.
648 - No JWT issuance, no runner registration flow — S41c.
649 - No log streaming, no SSE — S41d/f.
650 - No execution sandbox, no scrubbing, no injection guards
651 *enforced at the runner* — S41d/e (the parser-side taint contract
652 is the foundation those depend on, not a substitute).
653
654 ## Why these choices, in two paragraphs
655
656 The schema work is front-loaded so later sub-sprints don't ripple a
657 migration through every PR. `version` (optimistic locking) and
658 `run_index` (per-repo monotonic) are the two columns I'd flag to a
659 new maintainer immediately — both are nearly free to add up front
660 and painful to retrofit. The split between hot-path log chunks
661 (Postgres) and finalized blob (Spaces) is shaped after Forgejo's
662 log path; we pick the boring well-trodden answer over the clever
663 one because log throughput is the failure mode that bites first.
664
665 The taint contract is the security-load-bearing piece. Every later
666 sub-sprint trusts that the `Tainted` flag is set correctly here, in
667 the parser/evaluator, and never re-derived downstream. The narrow
668 allowlist of namespaces and functions exists exactly so a future PR
669 that adds, say, `fromJSON` has to do it knowingly — by widening the
670 allowlist in a visible diff, with a reviewer-required note, rather
671 than by accident. The `${{ github.* }}` alias is a pragmatic
672 concession to copy-paste users; the rebrand to `${{ shithub.* }}`
673 is the canonical form so future divergence isn't awkward.
674
675 ## See also
676
677 - `internal/actions/workflow/parse.go` — the parser
678 - `internal/actions/expr/eval.go` — the evaluator
679 - `internal/migrationsfs/migrations/0042..0049_*.sql` — the schema
680 - `tests/fixtures/workflows/*.yml` — canonical input shapes
681 - `internal/actions/workflow/parse_test.go` — fixture-driven tests
682 - `internal/actions/expr/eval_test.go` — taint-contract tests
683 - `.refs/forgejo/services/actions/` — reference architecture
684 - Campaign plan in conversation memory (humble-cooking-bunny)