
LTX2 video LoRA training

Train a Lightricks LTX video LoRA on a small set of source video clips using AI Toolkit. The output LoRA is usable in LTX2 video generation.

| ecosystem | Base | Buzz / epoch | Notes |
| --- | --- | --- | --- |
| ltx2 | Lightricks/LTX-2 (19B) | variable (formula-based) | Original LTX2. Cost is computed per-step from clip count + duration. |
| ltx23 | Lightricks/LTX-2.3 (22B) | 200 (flat) | Newer LTX 2.3. The higher per-epoch cost reflects the heavier model and is kept high deliberately to discourage very long runs. |

The base checkpoint is fixed by ecosystem; there's no model field on the input.

Long-running step

Video training is the slowest training mode on the platform. LTX 2.3 in particular is expensive — keep epochs ≤ 3 unless you have a clear reason. Always use wait=0 and follow up via webhook or polling.
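A minimal polling sketch in Python, using only the standard library. The GET-by-id endpoint and the set of non-terminal status names are assumptions based on the other recipes; check them against the envelope your account actually returns.

```python
import json
import time
import urllib.request

API = "https://orchestration.civitai.com/v2/consumer/workflows"
TOKEN = "YOUR_CIVITAI_TOKEN"  # placeholder: your orchestration token

# Assumed non-terminal status names; adjust to what your envelope reports.
PENDING_STATUSES = {"unassigned", "preparing", "scheduled", "processing"}

def is_terminal(status: str) -> bool:
    """True once a workflow has stopped (succeeded, failed, canceled, ...)."""
    return status not in PENDING_STATUSES

def poll_workflow(workflow_id: str, interval_s: float = 30.0,
                  timeout_s: float = 3600.0) -> dict:
    """GET the workflow by id until it reaches a terminal status."""
    headers = {"Authorization": f"Bearer {TOKEN}"}
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        req = urllib.request.Request(f"{API}/{workflow_id}", headers=headers)
        with urllib.request.urlopen(req, timeout=30) as resp:
            workflow = json.load(resp)
        if is_terminal(workflow.get("status", "")):
            return workflow
        time.sleep(interval_s)
    raise TimeoutError(f"workflow {workflow_id} still pending after {timeout_s}s")
```

A webhook is preferable for multi-hour runs; polling like this is the fallback when you can't receive callbacks.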

The request shape

```json
{
  "$type": "training",
  "input": {
    "engine":    "ai-toolkit",
    "ecosystem": "ltx2"          // ltx2 | ltx23
  }
}
```

Prerequisites

  • A Civitai orchestration token (Quick start → Prerequisites)
  • A training-data zip containing source video clips
  • An accurate count of clips in the zip
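Because the request requires an accurate `trainingData.count`, it can help to compute it from the zip itself rather than count by hand. A sketch, assuming clips are identified by common video extensions (adjust the set to your dataset):

```python
import zipfile

# Assumed extension list; extend it if your dataset uses other containers.
VIDEO_EXTENSIONS = {".mp4", ".mov", ".webm", ".mkv", ".avi"}

def count_clips(zip_path: str) -> int:
    """Count video files in a training-data zip for trainingData.count."""
    with zipfile.ZipFile(zip_path) as zf:
        return sum(
            1
            for name in zf.namelist()
            if not name.endswith("/")  # skip directory entries
            and "." in name
            and name[name.rfind("."):].lower() in VIDEO_EXTENSIONS
        )
```

Caption `.txt` files and folder entries in the zip are ignored, so the count matches the clips the trainer will actually see.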

LTX2

Original 19B-parameter LTX video model. resolution: 768 is the typical training resolution.

```http
POST https://orchestration.civitai.com/v2/consumer/workflows?wait=0
Authorization: Bearer <your-token>
Content-Type: application/json

{
  "tags": ["training", "video"],
  "steps": [{
    "$type": "training",
    "priority": "normal",
    "retries": 2,
    "input": {
      "engine": "ai-toolkit",
      "ecosystem": "ltx2",
      "epochs": 2,
      "resolution": 768,
      "lr": 0.0002,
      "trainTextEncoder": false,
      "lrScheduler": "cosine",
      "optimizerType": "adamw8bit",
      "networkDim": 32,
      "networkAlpha": 32,
      "trainingData": {
        "type": "zip",
        "sourceUrl": "https://civitai-delivery-worker-prod.5ac0637cfd0766c97916cefa3764fbdf.r2.cloudflarestorage.com/training-images/4470934/2725414TrainingData.nuB3.zip",
        "count": 4
      },
      "samples": { "prompts": ["a video of TOK", "TOK moving in a garden"] }
    }
  }]
}
```
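If you prefer to assemble and submit this body from code, here is a standard-library Python sketch. The helper names are illustrative, not part of the API; only the request shape mirrors the example above.

```python
import json
import urllib.request

API = "https://orchestration.civitai.com/v2/consumer/workflows?wait=0"

def build_ltx_training_payload(ecosystem: str, source_url: str, clip_count: int,
                               epochs: int = 2, **overrides) -> dict:
    """Assemble the request body shown above; extra keyword args merge into input."""
    if ecosystem not in ("ltx2", "ltx23"):
        raise ValueError("ecosystem must be 'ltx2' or 'ltx23'")
    input_block = {
        "engine": "ai-toolkit",
        "ecosystem": ecosystem,
        "epochs": epochs,
        "trainingData": {"type": "zip", "sourceUrl": source_url, "count": clip_count},
    }
    input_block.update(overrides)
    return {
        "tags": ["training", "video"],
        "steps": [{
            "$type": "training",
            "priority": "normal",
            "retries": 2,
            "input": input_block,
        }],
    }

def submit(payload: dict, token: str) -> dict:
    """POST the workflow and return the response envelope (it carries the workflow id)."""
    req = urllib.request.Request(
        API,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)
```

Pass `resolution=768, lr=0.0002` as overrides for LTX2, or switch the ecosystem to `ltx23` with `lr=0.0001` for the request in the next section.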

LTX 2.3

Newer 22B model. Same shape as LTX2; lr is typically lower and the per-epoch cost is materially higher (200 Buzz / epoch vs. ltx2's variable formula-based cost).

```http
POST https://orchestration.civitai.com/v2/consumer/workflows?wait=0
Authorization: Bearer <your-token>
Content-Type: application/json

{
  "tags": ["training", "video"],
  "steps": [{
    "$type": "training",
    "priority": "normal",
    "retries": 2,
    "input": {
      "engine": "ai-toolkit",
      "ecosystem": "ltx23",
      "epochs": 2,
      "lr": 0.0001,
      "trainTextEncoder": false,
      "lrScheduler": "cosine",
      "optimizerType": "adamw8bit",
      "networkDim": 32,
      "networkAlpha": 32,
      "trainingData": {
        "type": "zip",
        "sourceUrl": "https://civitai-delivery-worker-prod.5ac0637cfd0766c97916cefa3764fbdf.r2.cloudflarestorage.com/training-images/4470934/2725414TrainingData.nuB3.zip",
        "count": 4
      },
      "samples": { "prompts": ["a video of TOK", "TOK moving in a garden"] }
    }
  }]
}
```

Common parameters

Defaults shown are the post-ApplyDefaults values for both LTX ecosystems.

| Field | Required | Default | Notes |
| --- | --- | --- | --- |
| engine | ✓ | — | Always ai-toolkit. |
| ecosystem | ✓ | — | ltx2 or ltx23. |
| epochs | | 5 | 1–20. Billed per epoch. Keep low (2–3) for video. |
| numberOfRepeats | | (no auto-default) | 1–5000. |
| lr | | 0.0001 | LTX2 examples often use 0.0002; LTX 2.3 typically 0.0001. |
| trainTextEncoder | | false | Leave off; the LTX text encoder is not retrained by AI Toolkit. |
| lrScheduler | | cosine | constant, constant_with_warmup, cosine, linear, step. |
| optimizerType | | adamw8bit | See SDXL/SD1 page for the full enum. |
| networkDim | | 32 | 1–256. |
| networkAlpha | | matches networkDim | 1–256. |
| noiseOffset | | 0 | 0–1. |
| flipAugmentation | | false | Random horizontal flips. |
| shuffleTokens / keepTokens | | false / 0 | Caption-tag shuffling. |
| triggerWord | | (none) | Activation token. |
| trainingData.{type, sourceUrl, count} | ✓ | — | type: "zip". Zip should contain video clips. |
| samples.prompts[] | | [] | Per-epoch preview videos. |
| samples.negativePrompt | | (none) | |

Reading the result

Same envelope as the other training recipes — see SDXL/SD1 → Reading the result. Each epoch yields a video LoRA .safetensors blob plus any sample .mp4 files. Use the trained LoRA in LTX2 video generation by referencing it in the workflow's loras field.
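As a sketch of consuming that envelope, the helpers below walk `steps[].output.blobs[]` (an assumed layout; verify the key names against your actual response) and download the LoRA weights and sample videos:

```python
import urllib.request
from pathlib import Path

def select_artifacts(workflow: dict) -> list:
    """Pick (name, url) pairs for .safetensors weights and .mp4 samples.
    steps[].output.blobs[] is an assumed envelope layout; adjust as needed."""
    picked = []
    for step in workflow.get("steps", []):
        for blob in (step.get("output") or {}).get("blobs", []):
            name = blob.get("name", "")
            if name.endswith((".safetensors", ".mp4")):
                picked.append((name, blob["url"]))
    return picked

def download_artifacts(workflow: dict, dest: str = "lora_out") -> list:
    """Save every selected artifact into dest and return the local paths."""
    out_dir = Path(dest)
    out_dir.mkdir(parents=True, exist_ok=True)
    saved = []
    for name, url in select_artifacts(workflow):
        target = out_dir / Path(name).name  # flatten any path components
        urllib.request.urlretrieve(url, target)
        saved.append(str(target))
    return saved
```

Each epoch produces one `.safetensors` file, so a 2-epoch run should yield two LoRA candidates plus any sample renders.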

Runtime

Per-epoch wall time, default settings on a 4-clip dataset:

| Ecosystem | Per-epoch | Typical full run |
| --- | --- | --- |
| ltx2 | ~3–8 min | 6–16 min for 2 epochs |
| ltx23 | ~5–12 min | 10–25 min for 2 epochs |

Always use wait=0.

Cost

LTX2 uses a formula-based cost (per-step area + clip count); LTX 2.3 is flat at 200 Buzz / epoch.

```
ltx2:  total = epochs × computed_cost   (formula varies with clip count + duration)
ltx23: total = 200 × epochs
```

| Configuration | Buzz (training only) |
| --- | --- |
| LTX2, epochs: 2, 4 clips | ~10–40 (depends on clip duration) + samples |
| LTX 2.3, epochs: 2 | 400 + samples |
| LTX 2.3, epochs: 5 | 1000 + samples |

Sample-prompt rendering uses LTX2 video-generation rates and is billed separately. Run with whatif=true to see the exact pre-flight charge.
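The flat LTX 2.3 formula is easy to check locally, and the same request body can be sent with whatif=true for the authoritative number. A standard-library sketch; the shape of the whatif response is not documented here, so it is returned raw for inspection:

```python
import json
import urllib.request

WHATIF_URL = "https://orchestration.civitai.com/v2/consumer/workflows?whatif=true"

def ltx23_training_cost(epochs: int) -> int:
    """Flat LTX 2.3 training cost: 200 Buzz per epoch (samples billed separately)."""
    return 200 * epochs

def preflight(payload: dict, token: str) -> dict:
    """POST with whatif=true: nothing runs, and the response carries the pre-flight charge."""
    req = urllib.request.Request(
        WHATIF_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)
```

The helper reproduces the table above: 2 epochs is 400 Buzz, 5 epochs is 1000, before sample-prompt charges.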

Troubleshooting

| Symptom | Likely cause | Fix |
| --- | --- | --- |
| 400 with "trainingData.sourceUrl not reachable" | Signed URL expired, or zip behind auth | Regenerate the URL. R2 signed URLs default to 24h. |
| Step failed with VRAM-related error | Resolution × clip length too high | Lower resolution (e.g. to 512), shorten clips. |
| LTX 2.3 cost surprises you | Flat 200 Buzz / epoch, by design | Check whatif=true before submitting. Cap epochs at 2–3 unless you have budget. |
| Trained LoRA produces no motion | Too few epochs / static reference clips | Raise epochs; ensure clips show the motion you want learned. |
| Step failed, moderationStatus: "Rejected" | Dataset failed content moderation | Replace flagged clips. |

Civitai Developer Documentation