tenseleyflow/dlm-vsc / f2f0e3b

Add launch.json, tasks.json, test fixture, rebuild

Authored by mfwolffe <wolffemf@dukes.jmu.edu>
SHA: f2f0e3b42bfd3f149198979dfa049e457e8a675a
Parent: 3a411ea
Tree: c7f2be4

3 changed files

Status  File                      +   -
A       .vscode/launch.json       15  0
A       .vscode/tasks.json        11  0
A       test/fixtures/sample.dlm  38  0
.vscode/launch.json (added)
@@ -0,0 +1,15 @@
+{
+  "version": "0.2.0",
+  "configurations": [
+    {
+      "name": "Run Extension",
+      "type": "extensionHost",
+      "request": "launch",
+      "args": [
+        "--extensionDevelopmentPath=${workspaceFolder}"
+      ],
+      "outFiles": ["${workspaceFolder}/out/**/*.js"],
+      "preLaunchTask": "npm: build"
+    }
+  ]
+}
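
This configuration launches a second VS Code window (the Extension Development Host) with the repository root loaded as the extension under development. `outFiles` lets the debugger map the compiled JavaScript under `out/` back to its sources, and `preLaunchTask` runs the npm build task added below before every debug session. A common variant, not part of this commit, passes VS Code's `--disable-extensions` flag so other installed extensions can't interfere while debugging:

```jsonc
{
  // Hypothetical extra configuration; only "Run Extension" exists in this commit.
  "name": "Run Extension (isolated)",
  "type": "extensionHost",
  "request": "launch",
  "args": [
    "--extensionDevelopmentPath=${workspaceFolder}",
    "--disable-extensions" // built-in VS Code flag: start with all other extensions disabled
  ],
  "outFiles": ["${workspaceFolder}/out/**/*.js"],
  "preLaunchTask": "npm: build"
}
```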
.vscode/tasks.json (added)
@@ -0,0 +1,11 @@
+{
+  "version": "2.0.0",
+  "tasks": [
+    {
+      "type": "npm",
+      "script": "build",
+      "group": "build",
+      "problemMatcher": []
+    }
+  ]
+}
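
Both the task's `"type": "npm"` / `"script": "build"` pair and the `"preLaunchTask": "npm: build"` reference in launch.json resolve against a `build` entry in the repository's package.json, which this commit doesn't touch. A minimal sketch of what that entry presumably looks like for a TypeScript extension compiled into `out/` (the actual command is an assumption; it isn't shown in this diff):

```jsonc
{
  "scripts": {
    // Assumed: compile the extension sources to out/ so outFiles can resolve them.
    "build": "tsc -p ./"
  }
}
```

The empty `"problemMatcher": []` opts out of output scanning and silences the matcher-selection prompt; if the build really is tsc, swapping in the built-in `"$tsc"` matcher would surface compiler errors in the Problems panel.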
test/fixtures/sample.dlm (added)
@@ -0,0 +1,38 @@
+---
+dlm_id: 01KPQ9M3000000000000000000
+dlm_version: 15
+base_model: smollm2-135m
+training:
+  adapter: lora
+  lora_r: 16
+  learning_rate: 2e-4
+  num_epochs: 3
+---
+# Sample Document
+
+This is a sample DLM document for testing the VSCode extension.
+
+::instruction::
+### Q
+What is a Document Language Model?
+
+### A
+A .dlm file is a single UTF-8 text file that becomes a local, reproducible,
+trainable LLM. Edit the document, retrain, share.
+
+::instruction::
+### Q
+How do you train a DLM?
+
+### A
+Run `dlm train your-file.dlm` and the adapter trains on the document content.
+
+::preference::
+### Prompt
+Explain LoRA in one sentence.
+
+### Chosen
+LoRA adds small trainable matrices to frozen model layers, enabling efficient fine-tuning.
+
+### Rejected
+LoRA is a method for training language models that involves modifying the architecture of the model by introducing additional parameters in the form of low-rank decomposition matrices that are applied to the attention weight matrices, which allows for parameter-efficient fine-tuning while keeping the original pre-trained weights frozen.
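
The fixture covers the three layers a .dlm parser has to handle: YAML front matter between `---` fences, free markdown prose, and `::instruction::` / `::preference::` blocks subdivided by `### Q`/`### A` and `### Prompt`/`### Chosen`/`### Rejected` headings. Purely as an illustration, parsing this fixture's front matter might yield something like the following (the output shape is an assumption, not a documented DLM schema):

```jsonc
{
  // Values taken verbatim from the fixture's front matter.
  "dlm_id": "01KPQ9M3000000000000000000",
  "dlm_version": 15,
  "base_model": "smollm2-135m",
  "training": {
    "adapter": "lora",
    "lora_r": 16,
    "learning_rate": 0.0002, // 2e-4
    "num_epochs": 3
  }
}
```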