# Loader

Loader is a local-first coding assistant that runs against Ollama and drives a tool-using agent loop from the terminal or a Textual TUI.

## Requirements

- Python 3.11+
- [uv](https://docs.astral.sh/uv/)
- a running Ollama server on `http://localhost:11434`
- at least one pulled Ollama model (see [MODELS.md](MODELS.md))

## Install

```bash
# Global install — use loader from any directory
uv tool install git+https://github.com/tenseleyFlow/loader
loader "write a hello world program"

# Or install from a local clone
git clone https://github.com/tenseleyFlow/loader
cd loader
uv tool install -e .
```

## Development setup

```bash
uv sync --extra dev

uv run loader                          # TUI mode
uv run loader --no-tui                 # terminal mode
uv run loader --select-model           # pick an Ollama model

uv run pytest                          # run all tests
uv run pytest tests/test_foo.py -q     # single file
uv run ruff check src tests            # lint
uv run mypy src                        # type check (strict)
```

## How it works

Loader sends your prompt to a local Ollama model with tool schemas attached. The model calls tools (read, write, edit, bash, glob, grep, git) to complete the task. A typed turn loop drives the agent cycle:

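For context, a tool schema in the OpenAI-style function-calling format that Ollama's chat API accepts looks roughly like the following. The `read` tool name comes from the list above; the exact parameter shape is an illustrative assumption, not Loader's actual schema:

```python
# Illustrative schema for a "read" tool, in the function-calling
# format Ollama's chat API accepts. The parameter shape is assumed.
read_tool = {
    "type": "function",
    "function": {
        "name": "read",
        "description": "Read a file from the workspace and return its contents.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {
                    "type": "string",
                    "description": "File path relative to the workspace root.",
                },
            },
            "required": ["path"],
        },
    },
}
```
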
1. **Prepare** — detect project context, build system prompt, set workflow mode
2. **Assistant** — stream the model response, extract tool calls
3. **Tools** — execute the tool batch, record results to session
4. **Completion** — check definition-of-done, run verification if needed
5. **Repeat** or **finalize** based on whether the task is complete

The TUI shows tool calls with previews, an approval bar for writes outside the workspace, streaming output, and a status line with session state.

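The five-phase cycle above can be sketched as a small state machine. The names and shapes here are illustrative, not Loader's actual types:

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Phase(Enum):
    PREPARE = auto()
    ASSISTANT = auto()
    TOOLS = auto()
    COMPLETION = auto()
    DONE = auto()


@dataclass
class Turn:
    """Minimal turn state mirroring the five-phase cycle."""
    phase: Phase = Phase.PREPARE
    tool_calls: list = field(default_factory=list)
    results: list = field(default_factory=list)


def run_turn(turn, get_response, run_tool, is_done, max_cycles=10):
    for _ in range(max_cycles):
        if turn.phase is Phase.PREPARE:
            turn.phase = Phase.ASSISTANT          # context/prompt setup elided
        elif turn.phase is Phase.ASSISTANT:
            turn.tool_calls = get_response()      # stream response, extract tool calls
            turn.phase = Phase.TOOLS if turn.tool_calls else Phase.COMPLETION
        elif turn.phase is Phase.TOOLS:
            turn.results += [run_tool(c) for c in turn.tool_calls]
            turn.phase = Phase.COMPLETION
        elif turn.phase is Phase.COMPLETION:
            if is_done(turn):                     # definition-of-done check
                turn.phase = Phase.DONE
                return turn
            turn.phase = Phase.ASSISTANT          # not done: run another cycle
    return turn
```

Driving it with scripted responses (the same idea the test suite uses) keeps the loop fully deterministic.
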
## Key options

```
--permission-mode    read-only | workspace-write (default) | danger-full-access | prompt | allow
--select-model       choose from installed Ollama models
--plan               start in plan mode (outline before coding)
--clarify            start in clarify mode (ask questions first)
--react              force text-based tool calling (for models without native support)
--ctx N              context window size (default 8192)
```

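The permission modes above gate write operations. A sketch of how such gating could work, where `None` means "ask the user via the approval bar" (this is illustrative; Loader's actual rules may differ):

```python
from pathlib import Path


def write_allowed(path: str, workspace: str, mode: str):
    """Illustrative permission check: True = allow, False = deny, None = ask."""
    inside = Path(path).resolve().is_relative_to(Path(workspace).resolve())
    if mode == "read-only":
        return False
    if mode in ("danger-full-access", "allow"):
        return True
    if mode == "workspace-write":
        # Writes inside the workspace proceed; anything outside needs approval.
        return True if inside else None
    return None  # "prompt": always ask
```
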
## Repository layout

- `src/loader/runtime/` — turn engine, tool execution, verification, workflow routing
- `src/loader/tools/` — tool implementations (file, shell, search, git, workflow)
- `src/loader/llm/` — Ollama backend with native tool calling and streaming
- `src/loader/ui/` — Textual TUI with tool widgets, approval bar, status line
- `src/loader/cli/` — Click CLI entry point
- `tests/` — 416 deterministic tests with scripted backend harness
- `.docs/` — sprint planning, parity checkpoints, architecture analysis

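The "scripted backend harness" under `tests/` is what keeps the tests deterministic: code under test talks to a backend interface that replays canned responses instead of calling the network. A minimal sketch of the idea (not the project's actual harness):

```python
class ScriptedBackend:
    """Fake LLM backend that replays a fixed script of responses in order."""

    def __init__(self, script):
        self._script = list(script)

    def chat(self, messages):
        if not self._script:
            raise RuntimeError("script exhausted: more requests than expected")
        return self._script.pop(0)


def answer(backend, prompt):
    # Code under test depends only on the backend interface, never on Ollama.
    return backend.chat([{"role": "user", "content": prompt}])
```

Because the script is consumed in order, a test can also assert that the code under test made exactly the expected number of model calls.
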
## Documentation

- [MODELS.md](MODELS.md) — recommended Ollama models
- [.docs/REPORT.md](.docs/REPORT.md) — deep analysis vs reference implementations
- [.docs/PARITY.md](.docs/PARITY.md) — runtime feature inventory
- [.docs/sprints/](.docs/sprints/index.md) — sprint planning