
Flux 1 LoRA training

Train a Flux.1 LoRA on your own image dataset using AI Toolkit. The output LoRA is usable directly in Flux 1 image generation (sdcpp or Comfy paths).

| modelVariant | Base model | Inference characteristics |
|---|---|---|
| dev (default) | black-forest-labs/FLUX.1-dev | Higher fidelity, ~20–28 sampler steps. Good default for most LoRAs. |
| schnell | black-forest-labs/FLUX.1-schnell | Faster inference, 4 sampler steps, no CFG. Use when you specifically want a Schnell-targeted LoRA. |

The base checkpoint is fixed by modelVariant — there's no model field to override. To train on a non-BFL Flux.1 finetune, use the SDXL & SD1 or other-image ecosystems instead.

Long-running step

Flux 1 training is the most expensive AI Toolkit ecosystem (200 Buzz/epoch) and runs for roughly 1–2 minutes per epoch on a typical 10-image dataset. Always use wait=0 and follow up via polling or a webhook; see Results & webhooks.
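With wait=0 the POST returns immediately, so the client has to check back until the workflow reaches a terminal state. A minimal polling sketch; the fetch_status callable and the status names are illustrative stand-ins, not part of the documented API:

```python
import time

def poll_workflow(fetch_status, interval_s=30, timeout_s=3600):
    """Poll fetch_status() at a fixed interval until a terminal status.

    fetch_status: zero-arg callable returning the workflow's current
    status string (hypothetical; e.g. a GET on the workflow id).
    """
    terminal = {"succeeded", "failed", "canceled"}  # illustrative names
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status.lower() in terminal:
            return status
        time.sleep(interval_s)
    raise TimeoutError("workflow did not finish in time")
```

In practice a webhook avoids polling entirely; a loop like this is only the fallback.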

The request shape

```json
{
  "$type": "training",
  "input": {
    "engine":       "ai-toolkit",
    "ecosystem":    "flux1",
    "modelVariant": "dev"        // dev | schnell
  }
}
```
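As a pre-submit sanity check, the minimal step above can be built and validated in a few lines. A hedged sketch; build_flux1_step is a hypothetical helper, not part of any SDK:

```python
def build_flux1_step(model_variant="dev", **extra):
    """Build the minimal Flux 1 training step; reject bad variants early."""
    if model_variant not in ("dev", "schnell"):
        raise ValueError("modelVariant must be 'dev' or 'schnell'")
    return {
        "$type": "training",
        "input": {
            "engine": "ai-toolkit",
            "ecosystem": "flux1",
            "modelVariant": model_variant,
            **extra,  # epochs, lr, trainingData, samples, ...
        },
    }
```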

Prerequisites

  • A Civitai orchestration token (Quick start → Prerequisites)
  • A training-data zip uploaded to a reachable URL (signed R2 URL, Civitai R2 AIR, or any HTTPS URL)
  • An accurate count of images in the zip
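Since count must match the number of images actually in the zip, it is worth computing locally before upload. A small stdlib sketch; the extension list is an assumption, so check which formats the trainer actually accepts:

```python
import zipfile

IMAGE_EXTS = (".png", ".jpg", ".jpeg", ".webp")  # assumed accepted formats

def count_images(zip_file):
    """Count image files in a training-data zip, skipping captions etc."""
    with zipfile.ZipFile(zip_file) as zf:
        return sum(
            1 for name in zf.namelist()
            if name.lower().endswith(IMAGE_EXTS)
        )
```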

Flux 1 dev (default)

Trains on top of FLUX.1-dev and produces a LoRA usable with any Flux 1 dev workflow.

```http
POST https://orchestration.civitai.com/v2/consumer/workflows?wait=0
Authorization: Bearer <your-token>
Content-Type: application/json

{
  "tags": ["training"],
  "steps": [{
    "$type": "training",
    "priority": "normal",
    "retries": 2,
    "input": {
      "engine": "ai-toolkit",
      "ecosystem": "flux1",
      "modelVariant": "dev",
      "epochs": 5,
      "resolution": 1024,
      "lr": 0.0001,
      "trainTextEncoder": false,
      "lrScheduler": "cosine",
      "optimizerType": "adamw8bit",
      "networkDim": 16,
      "networkAlpha": 16,
      "trainingData": {
        "type": "zip",
        "sourceUrl": "urn:air:other:other:civitai-r2:civitai-delivery-worker-prod@training-images/6/2657604TrainingData.EYBd.zip",
        "count": 10
      },
      "samples": {
        "prompts": ["a photo of TOK", "TOK in a garden", "TOK portrait"]
      }
    }
  }]
}
```

Flux 1 schnell

Trains on top of FLUX.1-schnell. Inference uses 4 steps and cfgScale: 0 — the output LoRA is meant to be used in those conditions.

```http
POST https://orchestration.civitai.com/v2/consumer/workflows?wait=0
Authorization: Bearer <your-token>
Content-Type: application/json

{
  "tags": ["training"],
  "steps": [{
    "$type": "training",
    "input": {
      "engine": "ai-toolkit",
      "ecosystem": "flux1",
      "modelVariant": "schnell",
      "epochs": 5,
      "lr": 0.0001,
      "trainTextEncoder": false,
      "networkDim": 16,
      "networkAlpha": 16,
      "trainingData": {
        "type": "zip",
        "sourceUrl": "urn:air:other:other:civitai-r2:civitai-delivery-worker-prod@training-images/6/2657604TrainingData.EYBd.zip",
        "count": 10
      },
      "samples": { "prompts": ["a photo of TOK", "TOK in a garden"] }
    }
  }]
}
```
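A Schnell LoRA is meant to be generated with 4 steps and cfgScale: 0. A hedged sketch of a matching generation input; field names other than steps, cfgScale, and loras are assumptions, so check the Flux 1 image generation page for the real shape:

```python
def schnell_generation_input(lora_air, prompt, strength=1.0):
    """Illustrative Flux 1 schnell generation input using a trained LoRA."""
    return {
        "ecosystem": "flux1",       # assumed field name
        "modelVariant": "schnell",  # assumed field name
        "prompt": prompt,
        "steps": 4,                 # Schnell target: 4 sampler steps
        "cfgScale": 0,              # Schnell runs without CFG
        "loras": [{"air": lora_air, "strength": strength}],  # shape assumed
    }
```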

Common parameters

Shared by both Flux 1 variants. Defaults shown are after ApplyDefaults.

| Field | Required | Default | Notes |
|---|---|---|---|
| engine | ✓ | — | Always ai-toolkit. |
| ecosystem | ✓ | — | Always flux1 for this page. |
| modelVariant | ✓ | — | dev or schnell. Determines the base checkpoint. |
| epochs | | 5 | 1–20. Billed per epoch. |
| numberOfRepeats | | auto: ceil(200 / count) | 1–5000. |
| lr | | 0.0001 | UNet learning rate. Flux 1 is sensitive to high LRs; keep ≤ 0.0005. |
| trainTextEncoder | | false | Flux 1 does not benefit much from text-encoder training. Leave off. |
| lrScheduler | | cosine | constant, constant_with_warmup, cosine, linear, step. |
| optimizerType | | adamw8bit | adamw, adamw8bit, adam8bit, lion, lion8bit, adafactor, adagrad, prodigy, prodigy8bit, automagic. |
| networkDim | | 16 | 1–256. Flux 1's lower default reflects how compactly Flux LoRAs encode style/character vs. SD-family. |
| networkAlpha | | matches networkDim | 1–256. |
| noiseOffset | | 0 | 0–1. |
| flipAugmentation | | false | Random horizontal flips. |
| shuffleTokens / keepTokens | | false / 0 | Caption-tag shuffling. |
| triggerWord | | (none) | Activation token. Recommended for character / style LoRAs. |
| trainingData.{type, sourceUrl, count} | ✓ | — | Always type: "zip". |
| samples.prompts[] | | [] | Preview prompts rendered after each epoch using the trained LoRA at strength 1.0. |
| samples.negativePrompt | | (none) | |
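The auto default for numberOfRepeats targets roughly 200 images seen per epoch. A quick sketch of that formula with the 1–5000 clamp applied:

```python
import math

def auto_repeats(image_count):
    """numberOfRepeats default: ceil(200 / count), clamped to 1-5000."""
    return min(5000, max(1, math.ceil(200 / image_count)))
```

For the 10-image example dataset this yields 20 repeats per epoch.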

Reading the result

Same envelope as the other training recipes — see SDXL/SD1 → Reading the result for the full shape. The relevant bit:

```json
{
  "output": {
    "moderationStatus": "Approved",
    "epochs": [
      {
        "epochNumber": 1,
        "model": { "id": "blob_...", "url": "https://.../epoch_1.safetensors" },
        "samples": [{ "id": "blob_...", "url": "https://.../sample_0.jpeg" }]
      }
    ]
  }
}
```

The model blob is your trained LoRA. Download it promptly (URLs are signed and expire), or reference its AIR in the loras field of a Flux 1 image generation request to use it directly.
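Given the envelope above, pulling the last (highest-numbered) epoch's LoRA URL is worth wrapping with a moderation check. A sketch over the documented shape:

```python
def latest_lora_url(output):
    """Return the model URL of the highest-numbered epoch, or None."""
    if output.get("moderationStatus") != "Approved":
        return None
    epochs = output.get("epochs", [])
    if not epochs:
        return None
    best = max(epochs, key=lambda e: e["epochNumber"])
    return best["model"]["url"]
```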

Runtime

Per-epoch wall time on a 10-image dataset, default settings:

| Variant | Per-epoch | 5-epoch full run |
|---|---|---|
| dev | ~60–120 s | 5–15 min |
| schnell | ~60–120 s | 5–15 min |

Always use wait=0.

Cost

total = 200 × epochs   (Buzz)

| Configuration | Buzz |
|---|---|
| epochs: 5 | 1000 + samples |
| epochs: 10 | 2000 + samples |
| epochs: 20 (max) | 4000 + samples |

Sample-prompt rendering is billed separately at the appropriate Flux 1 generation rate. Run with whatif=true to see the exact pre-flight charge before committing.
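The training portion of the cost is linear in epochs. A tiny estimator for that portion only (sample-rendering charges vary by prompt count and are not modeled here):

```python
def training_buzz(epochs):
    """Training cost in Buzz, excluding sample-prompt rendering."""
    if not 1 <= epochs <= 20:
        raise ValueError("epochs must be 1-20")
    return 200 * epochs
```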

Troubleshooting

| Symptom | Likely cause | Fix |
|---|---|---|
| 400 with "modelVariant required" | Missing modelVariant field | Set to "dev" or "schnell". |
| 400 with "epochs out of range" | epochs outside 1–20 | Cap at 20. |
| 400 with "trainingData.sourceUrl not reachable" | Signed URL expired | Regenerate. Prefer Civitai R2 AIRs over signed URLs for long-lived references. |
| Trained LoRA underbaked | Too few epochs for dataset, or lr too low | Raise epochs to 8–12 for character LoRAs; keep lr at 0.0001–0.0003. |
| Trained LoRA overfits | Too many epochs / too high networkDim | Lower epochs, drop networkDim to 8–12. |
| Step failed, output moderationStatus: "Rejected" | Dataset failed content moderation | Replace flagged images. |

Civitai Developer Documentation