{#
SmolLM3 DLM-supported reference template. Upstream ships a larger Jinja
template with date/tool injection and `/think` flag parsing; DLM only
needs the stable chat subset for export parity:

- a single system turn
- user / assistant messages
- a trailing assistant generation prompt

When no explicit system message is present, we pin SmolLM3's
reasoning-first default brief so exported prompts keep the model's
intended "think first, answer second" posture.
#}
{%- set default_system -%}
You are a helpful AI assistant named SmolLM, trained by Hugging Face.

Think through the problem carefully before answering. Return two
sections named Thought and Solution. Keep the final solution concise and
accurate.
{%- endset -%}
{%- if messages and messages[0]['role'] == 'system' -%}
<|im_start|>system
{{ messages[0]['content'] }}<|im_end|>
{% else -%}
<|im_start|>system
{{ default_system }}<|im_end|>
{% endif -%}
{%- for message in messages -%}
{%- if message['role'] != 'system' -%}
<|im_start|>{{ message['role'] }}
{{ message['content'] }}<|im_end|>
{% endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
<|im_start|>assistant
{% endif -%}
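{#
Illustrative sketch only (the user turn "Hello!" is a made-up example,
not from upstream): rendered with a single user message and
`add_generation_prompt` set, the template is intended to produce a
ChatML-style transcript of roughly this shape, with exact newline
placement governed by the whitespace-control markers above:

    <|im_start|>system
    ...default or provided system brief...<|im_end|>
    <|im_start|>user
    Hello!<|im_end|>
    <|im_start|>assistant
#}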