
# Social Feed

S42 turns the S26 social primitives into a GitHub-like network surface: follow graph, authenticated Home feed, public Explore feed, and cached trending rankings.

## Follow Graph

`follows` stores one follower user and exactly one target:

- `followee_user_id` for user profiles.
- `followee_org_id` for organization profiles.

The schema enforces target XOR, blocks user self-follows, cascades on deleted users/orgs, and uses partial unique indexes so follow/unfollow is idempotent. State changes go through `internal/social` and record audit rows when an audit recorder is supplied.
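Those constraints can be sketched in DDL. This is an illustrative sketch only; the real schema's column, table, and index names may differ:

```sql
-- Sketch of the follows constraints; names are assumptions.
CREATE TABLE follows (
    id               bigserial PRIMARY KEY,
    follower_user_id bigint NOT NULL REFERENCES users (id) ON DELETE CASCADE,
    followee_user_id bigint REFERENCES users (id) ON DELETE CASCADE,
    followee_org_id  bigint REFERENCES orgs (id) ON DELETE CASCADE,
    created_at       timestamptz NOT NULL DEFAULT now(),
    -- Target XOR: exactly one of user / org is set.
    CHECK ((followee_user_id IS NULL) <> (followee_org_id IS NULL)),
    -- Block user self-follows.
    CHECK (followee_user_id IS DISTINCT FROM follower_user_id)
);

-- Partial unique indexes: repeated follows conflict on the same row,
-- so INSERT ... ON CONFLICT DO NOTHING makes follow idempotent.
CREATE UNIQUE INDEX follows_user_target
    ON follows (follower_user_id, followee_user_id)
    WHERE followee_user_id IS NOT NULL;
CREATE UNIQUE INDEX follows_org_target
    ON follows (follower_user_id, followee_org_id)
    WHERE followee_org_id IS NOT NULL;
```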

Follow actions emit public user-scoped `domain_events`:

- `followed_user`, `source_kind = "user"`, `source_id = target_user_id`
- `followed_org`, `source_kind = "org"`, `source_id = target_org_id`

The web layer exposes profile/org Follow buttons and follower/following tabs. Suspended actors are rejected by middleware/policy before mutation.

## Feeds

Feeds read from `domain_events`; handlers never hand-roll visibility logic. Public feeds require `domain_events.public = true`, non-deleted actors, non-suspended actors, and a current public repo if the event is repo-scoped. This second repo visibility check is load-bearing: an event emitted while a repo was public must not leak after the repo becomes private.
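The visibility predicate might look roughly like the following; table and column names here are assumptions, not the actual query. The key point is that the repo join checks the repo's *current* visibility at read time, so events from since-privatized repos drop out:

```sql
-- Sketch of the public-feed visibility predicate (names assumed).
SELECT e.*
FROM domain_events e
JOIN users actor ON actor.id = e.actor_user_id
LEFT JOIN repos r ON e.source_kind = 'repo' AND r.id = e.source_id
WHERE e.public
  AND actor.deleted_at IS NULL
  AND NOT actor.suspended
  -- Repo-scoped events require a repo that is public *now*.
  AND (e.source_kind <> 'repo' OR (r.id IS NOT NULL AND r.public));
```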

The authenticated Home feed includes:

- the viewer's own public activity,
- public activity from followed users,
- public activity from repos the viewer watches,
- public activity from repos owned by followed orgs,
- public org-scoped activity for followed orgs.

Explore uses the global public feed. Both feeds page with a keyset cursor over `(created_at, id)`.
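Keyset pagination over `(created_at, id)` compares the composite pair rather than using `OFFSET`, so deep pages stay cheap and stable under concurrent inserts. A minimal sketch, with placeholder parameter names:

```sql
-- The cursor is the (created_at, id) pair of the last row on the
-- previous page; the first page omits the row-value comparison.
SELECT *
FROM domain_events
WHERE public
  AND (created_at, id) < (:cursor_created_at, :cursor_id)
ORDER BY created_at DESC, id DESC
LIMIT 30;
```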

## Event Kinds

Current feed sources include:

- `repo_created`
- `push`
- `star` / `unstar`
- `forked`
- `issue_created`, comments, close/reopen, assignment events
- `pr_opened` and pull-request comment events
- `followed_user` / `followed_org`

The `kind` and `source_kind` columns remain text. New product surfaces can add events without a schema migration as long as their payload is small JSON and the public flag is set conservatively.
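Because both columns are free-form text, introducing a new event is just an insert. The event kind and column names below are hypothetical, purely to illustrate the shape:

```sql
-- Hypothetical new kind; no migration needed. Keep the payload small
-- and default the public flag conservatively (false unless certain).
INSERT INTO domain_events (kind, source_kind, source_id, actor_user_id, payload, public)
VALUES ('release_published', 'repo', 42, 7, '{"tag": "v1.2.0"}', false);
```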

## Trending

`trending_snapshots` stores denormalized rankings for day/week/month windows and two kinds:

- `repos`
- `users`

The `trending:compute` worker job captures all six snapshots. A job with an empty payload schedules its next run one hour later; pass `{"schedule_next":false}` for a one-off recompute. Explore reads the latest weekly snapshot and falls back to live computation before the first worker run.

The repo score weights recent public stars, forks, and unique push actors. The user score weights recent followers plus recent public event activity.
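The repo ranking could be computed along these lines. The 3/2/1 weights below are invented placeholders, not the real coefficients, and the table and column names are assumptions:

```sql
-- Sketch of the repo trending score for the weekly window.
SELECT r.id,
       3 * count(*) FILTER (WHERE e.kind = 'star')
     + 2 * count(*) FILTER (WHERE e.kind = 'forked')
     + 1 * count(DISTINCT e.actor_user_id) FILTER (WHERE e.kind = 'push')
       AS score
FROM repos r
JOIN domain_events e
  ON e.source_kind = 'repo' AND e.source_id = r.id
WHERE e.public
  AND e.created_at > now() - interval '7 days'
GROUP BY r.id
ORDER BY score DESC
LIMIT 25;
```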

## Operational Notes

Seed the recurring job once after deploy:

```sql
INSERT INTO jobs (kind, payload) VALUES ('trending:compute', '{}');
SELECT pg_notify('shithub_jobs', '');
```

The job is safe to re-run. Multiple recurring seeds produce multiple hourly refresh jobs, so operators should keep one scheduled chain per instance unless they intentionally want a shorter effective interval.
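A one-off recompute mirrors the seed command but disables rescheduling, so it does not start a second hourly chain:

```sql
INSERT INTO jobs (kind, payload)
VALUES ('trending:compute', '{"schedule_next": false}');
SELECT pg_notify('shithub_jobs', '');
```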
