Select Model
Pick a pre-trained checkpoint to fine-tune.
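A minimal sketch of this step, assuming checkpoints are loaded through Hugging Face transformers; the model name is a stand-in for whichever checkpoint is selected:

```python
# Sketch: loading a pre-trained checkpoint with Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; substitute the checkpoint picked in the UI
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```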
Method
Full-parameter fine-tuning or LoRA.
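How the two methods differ in code, sketched with the peft library; the rank, alpha, dropout, and target-module values are illustrative assumptions, not the app's settings:

```python
# Sketch: enabling LoRA with peft. Full-parameter training is simply the
# default (all weights trainable); LoRA freezes the base weights and trains
# low-rank adapters instead.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder checkpoint
lora = LoraConfig(
    r=8,                        # low-rank dimension (assumed value)
    lora_alpha=16,              # updates are scaled by alpha / r
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projections; names vary by model
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapter matrices A and B train
```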
Training Data
Upload a CSV or fall back to finetuning.csv.
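A loader sketch for this step; the prompt/response column names are assumptions about the CSV schema:

```python
# Sketch: load the uploaded CSV, falling back to the bundled finetuning.csv.
import os
from typing import Optional

import pandas as pd

def load_training_data(uploaded_path: Optional[str]) -> pd.DataFrame:
    path = uploaded_path if uploaded_path and os.path.exists(uploaded_path) \
        else "finetuning.csv"
    df = pd.read_csv(path)
    missing = {"prompt", "response"} - set(df.columns)  # assumed schema
    if missing:
        raise ValueError(f"CSV missing columns: {missing}")
    return df
```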
Hyperparameters
Configure the fine-tuning run; the defaults below are captured in the sketch after the list.
Batch Size: 4
Learning Rate: 0.00001
Max Length: 512 tokens
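The panel's defaults captured as a plain config object; the field names are illustrative, only the values come from the list above:

```python
# Sketch: the hyperparameter panel's defaults as a config object.
from dataclasses import dataclass

@dataclass
class FinetuneConfig:
    batch_size: int = 4
    learning_rate: float = 1e-5   # the panel's 0.00001
    max_length: int = 512         # maximum sequence length in tokens
```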
Understand
Masked loss and optional LoRA math.
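Masked loss means only response tokens contribute to the cross-entropy: prompt positions get the label -100, which PyTorch's loss ignores. (For LoRA, the weight update ΔW is replaced by the low-rank product BA scaled by alpha/r, as in the Method sketch above.) A minimal sketch, assuming a single prompt length per batch:

```python
# Sketch of masked loss: prompt tokens are excluded by setting their labels
# to -100, which cross_entropy skips via ignore_index.
import torch
import torch.nn.functional as F

def masked_lm_loss(logits, input_ids, prompt_len):
    # logits: (batch, seq, vocab); input_ids: (batch, seq)
    labels = input_ids.clone()
    labels[:, :prompt_len] = -100       # mask the prompt span
    shift_logits = logits[:, :-1, :]    # position t predicts token t+1
    shift_labels = labels[:, 1:]
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        ignore_index=-100,              # masked positions contribute nothing
    )
```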
Train
Launch the fine-tuning run.
Note: in demo mode, fine-tuning is disabled.
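What one training step looks like when fine-tuning is enabled; a sketch assuming the model and masked labels from the earlier snippets:

```python
# Sketch of a single training step. Assumes `model` from the earlier sketches
# and batches whose labels already have -100 on prompt positions.
import torch

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def train_step(batch) -> float:
    optimizer.zero_grad()
    out = model(input_ids=batch["input_ids"],
                attention_mask=batch["attention_mask"],
                labels=batch["labels"])  # HF models return .loss with labels
    out.loss.backward()
    optimizer.step()
    return out.loss.item()
```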
Live Metrics
Loss and gradient statistics for the active run: Progress, Elapsed, ETA, Loss, Running Loss, and Grad Norm (all blank until training starts).
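How the two derived metrics can be computed; a sketch, with the EMA smoothing factor as an illustrative choice:

```python
# Sketch: grad norm is the global L2 norm over all gradients; running loss
# is an exponential moving average of per-step losses.
import torch

def grad_norm(model) -> float:
    # clip_grad_norm_ with max_norm=inf returns the total norm without clipping
    return torch.nn.utils.clip_grad_norm_(model.parameters(), float("inf")).item()

def update_running_loss(running, loss, beta=0.98):
    # beta=0.98 is an illustrative smoothing factor, not the app's value
    return loss if running is None else beta * running + (1 - beta) * loss
```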
Inspect Batch
Prompt vs response tokens and attention patterns.
Select a sample to view its prompt/response token split and the attention heatmap (query positions on the rows ↓, key positions on the columns →).
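A sketch of how the heatmap's attention weights can be recovered, assuming a transformers model and a tokenized batch from the earlier snippets; the layer and head indices are arbitrary picks:

```python
# Sketch: recovering the attention pattern the heatmap displays.
import torch

with torch.no_grad():
    out = model(input_ids=batch["input_ids"], output_attentions=True)
# out.attentions: tuple of (batch, heads, query_len, key_len), one per layer
attn = out.attentions[-1][0, 0]  # last layer, first sample, first head
# attn[q, k] = weight that query position q places on key position k
```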
Eval History
Train vs validation loss.
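A sketch of the validation pass behind this plot, assuming validation batches carry the same masked labels used in training:

```python
# Sketch: mean validation loss, plotted against training loss each eval step.
import torch

@torch.no_grad()
def eval_loss(model, val_loader) -> float:
    model.eval()
    losses = [model(**batch).loss.item() for batch in val_loader]
    model.train()
    return sum(losses) / len(losses)
```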
Logs
Checkpoint events.
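The kind of event this log records; a sketch, with the path and logging format as assumptions:

```python
# Sketch: saving a checkpoint and logging the event.
import logging
import os

import torch

logger = logging.getLogger("finetune")

def save_checkpoint(model, optimizer, step: int, path: str = "checkpoints"):
    os.makedirs(path, exist_ok=True)
    ckpt = {"step": step,
            "model": model.state_dict(),
            "optimizer": optimizer.state_dict()}
    torch.save(ckpt, f"{path}/step_{step}.pt")
    logger.info("saved checkpoint at step %d", step)
```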