@@ -7,8 +7,10 @@ No telemetry, no uploads, no cloud. Built on PyTorch + HuggingFace with a
| 7 | 7 | hardware-aware planner that picks precision, attention, and batching for your |
| 8 | 8 | box. |
| 9 | 9 | |
| 10 | | -**Status:** pre-alpha. The foundation (CLI, document parser, content-addressed |
| 11 | | -store, hardware doctor) is landing now; real training lands next. |
| | 10 | +**Status:** pre-release. The full v1.0 command surface is wired:
| | 11 | +`init`, `train`, `prompt`, `export`, `pack`, `unpack`, `doctor`,
| | 12 | +`show`, and `migrate`. The reproducibility lock and docs polish are the
| | 13 | +remaining Phase 3 work before a tagged release.
| 12 | 14 | |
| 13 | 15 | ## What it does |
| 14 | 16 | |
@@ -56,8 +58,6 @@ is pinned in the CLI; `dlm doctor` reports it).
| 56 | 58 | |
| 57 | 59 | ## Quickstart |
| 58 | 60 | |
| 59 | | -Once training lands (Sprint 9 — not shipped yet), the loop is: |
| 60 | | - |
| 61 | 61 | ```sh |
| 62 | 62 | uv run dlm init mydoc.dlm # scaffold a new .dlm |
| 63 | 63 | # edit mydoc.dlm — write prose, add ### Q / ### A pairs, etc. |
@@ -67,8 +67,19 @@ uv run dlm export mydoc.dlm --name mydoc # register with Ollama
| 67 | 67 | ollama run mydoc # use it |
| 68 | 68 | ``` |
| 69 | 69 | |
| 70 | | -Today, `dlm doctor` and the `.dlm` parser surface are functional; other |
| 71 | | -subcommands are stubs that report which release will implement them. |
| | 70 | +`dlm pack mydoc.dlm` produces a portable `.dlm.pack` bundle you can
| | 71 | +hand off to another machine; `dlm unpack` installs it on the other end.
| | 72 | +`dlm show mydoc.dlm` prints training history, exports, and adapter
| | 73 | +state; `dlm doctor` reports the resolved hardware plan.
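| | 74 | +
| | 75 | +For example, a two-machine handoff might look like this (`other-box` is a
| | 76 | +placeholder host, and the bundle name is assumed to be `mydoc.dlm.pack`):
| | 77 | +
| | 78 | +```sh
| | 79 | +dlm pack mydoc.dlm                         # writes the .dlm.pack bundle
| | 80 | +scp mydoc.dlm.pack other-box:              # copy it to the other machine
| | 81 | +ssh other-box 'dlm unpack mydoc.dlm.pack'  # install it there
| | 82 | +```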
| 72 | 83 | |
| 73 | 84 | ## Principles
| 74 | 85 | |