# Recommended Ollama Models for Loader

## Currently Installed

| Model | Size |
|-------|------|
| `qwen2.5-coder:14b` | 9.0 GB |
| `qwen2.5-coder:7b` | 4.7 GB |
| `qwen2.5:14b` | 9.0 GB |
| `deepseek-r1:14b` | 9.0 GB |
| `deepseek-coder-v2:16b` | 8.9 GB |
| `codellama:13b` | 7.4 GB |
| `gemma2:9b` | 5.4 GB |
| `mistral:7b` | 4.4 GB |
| `llama3.2:3b` | 2.0 GB |
| `phi3:mini` | 2.2 GB |

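Before pulling anything larger, it's worth knowing the table above already takes a fair bit of disk. A quick tally (sizes hard-coded from the table rather than parsed live from `ollama list`):

```shell
# Sum the installed-model sizes from the table above (all in GB).
total=$(printf '%s\n' 9.0 4.7 9.0 9.0 8.9 7.4 5.4 4.4 2.0 2.2 |
  awk '{s += $1} END {printf "%.1f", s}')
echo "Installed models: ${total} GB"   # prints Installed models: 62.0 GB
```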
## Models to Try Next

### Heavy Hitters (best quality, need more VRAM)

| Model | Size | Why |
|-------|------|-----|
| `qwen2.5-coder:32b` | ~20GB | Best open coding model, rivals GPT-4 on benchmarks |
| `deepseek-r1:32b` | ~20GB | Larger reasoning model, even better multi-step logic |
| `codestral:22b` | ~13GB | Mistral's dedicated coding model, excellent tool use |
| `llama3.3:70b` | ~40GB | Meta's flagship, state-of-the-art instruction following |

### Mid-Size Sweet Spot

| Model | Size | Why |
|-------|------|-----|
| `starcoder2:15b` | ~9GB | BigCode's latest, trained on massive code corpus |
| `granite-code:20b` | ~12GB | IBM's code model, strong at enterprise patterns |
| `yi-coder:9b` | ~5.5GB | 01.AI's coding model, great at code completion |
| `phi4:14b` | ~8GB | Microsoft's latest, punches above its weight |

### Lightweight Speed Demons

| Model | Size | Why |
|-------|------|-----|
| `llama3.1:8b` | ~4.9GB | Smallest recent Llama with strong instruction following (`llama3.3` only ships as 70b) |
| `qwen2.5-coder:3b` | ~2GB | Tiny but surprisingly capable for quick tasks |
| `deepseek-r1:7b` | ~4.7GB | Reasoning in a smaller package |
| `codegemma:7b` | ~5GB | Google's code-specific Gemma variant |

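The tiers above map roughly to VRAM budgets. As a sketch (the thresholds and the pick within each tier are assumptions here, not official Ollama guidance), a small helper that chooses a model for a given VRAM budget:

```shell
# Hypothetical helper: map available VRAM (integer GB) to one model
# from the tables above. Rough assumptions: ~20GB models want a 24GB+
# card, the ~9-13GB mid-size tier fits 12-16GB cards, and anything
# smaller falls back to a lightweight model.
pick_model() {
  vram_gb=$1
  if [ "$vram_gb" -ge 24 ]; then
    echo "qwen2.5-coder:32b"   # heavy hitter
  elif [ "$vram_gb" -ge 12 ]; then
    echo "starcoder2:15b"      # mid-size sweet spot
  else
    echo "qwen2.5-coder:3b"    # lightweight
  fi
}

pick_model 16   # prints starcoder2:15b
```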
## Pull Commands

```bash
# Heavy hitters (if you have the VRAM)
ollama pull qwen2.5-coder:32b
ollama pull deepseek-r1:32b
ollama pull codestral:22b

# Mid-size (recommended next pulls)
ollama pull starcoder2:15b
ollama pull granite-code:20b
ollama pull yi-coder:9b
ollama pull phi4:14b

# Lightweight
ollama pull llama3.1:8b
ollama pull qwen2.5-coder:3b
ollama pull deepseek-r1:7b
ollama pull codegemma:7b
```