Compatibility Matrix#
The tables below list every supported model and the optimizations available for each.
The symbols have the following meanings:

- ✅ = Fully compatible
- ❌ = Not compatible
- ⭕ = Not applicable to this model
Models x Optimization#
The Hugging Face Model ID can be passed directly to `from_pretrained()` methods, and sglang-diffusion will use the optimal default parameters when initializing models and generating videos.
Video Generation Models#
| Model Name | Hugging Face Model ID | Resolutions | TeaCache | Sliding Tile Attn | Sage Attn | Video Sparse Attention (VSA) | Sparse Linear Attention (SLA) | Sage Sparse Linear Attention (SageSLA) | Sparse Video Gen 2 (SVG2) |
|---|---|---|---|---|---|---|---|---|---|
| FastWan2.1 T2V 1.3B | | 480p | ⭕ | ⭕ | ⭕ | ✅ | ❌ | ❌ | ❌ |
| FastWan2.2 TI2V 5B Full Attn | | 720p | ⭕ | ⭕ | ⭕ | ✅ | ❌ | ❌ | ❌ |
| Wan2.2 TI2V 5B | | 720p | ⭕ | ⭕ | ✅ | ⭕ | ❌ | ❌ | ❌ |
| Wan2.2 T2V A14B | | 480p | ❌ | ❌ | ✅ | ⭕ | ❌ | ❌ | ❌ |
| Wan2.2 I2V A14B | | 480p | ❌ | ❌ | ✅ | ⭕ | ❌ | ❌ | ❌ |
| HunyuanVideo | | 720×1280 | ❌ | ✅ | ✅ | ⭕ | ❌ | ❌ | ✅ |
| FastHunyuan | | 720×1280 | ❌ | ✅ | ✅ | ⭕ | ❌ | ❌ | ✅ |
| Wan2.1 T2V 1.3B | | 480p | ✅ | ✅ | ✅ | ⭕ | ❌ | ❌ | ✅ |
| Wan2.1 T2V 14B | | 480p, 720p | ✅ | ✅ | ✅ | ⭕ | ❌ | ❌ | ✅ |
| Wan2.1 I2V 480P | | 480p | ✅ | ✅ | ✅ | ⭕ | ❌ | ❌ | ✅ |
| Wan2.1 I2V 720P | | 720p | ✅ | ✅ | ✅ | ⭕ | ❌ | ❌ | ✅ |
| TurboWan2.1 T2V 1.3B | | 480p | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ⭕ |
| TurboWan2.1 T2V 14B | | 480p | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ⭕ |
| TurboWan2.1 T2V 14B 720P | | 720p | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ⭕ |
| TurboWan2.2 I2V A14B | | 720p | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ⭕ |
| Wan2.1 Fun 1.3B InP | | 480p | ✅ | ✅ | ✅ | ⭕ | ❌ | ❌ | ✅ |
| Helios Base | | 720p | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Helios Mid | | 720p | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Helios Distilled | | 720p | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| LTX-2 (one/two-stage/TI2V) | | 768×512 | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| LTX-2.3 (one/two-stage/TI2V/HQ) | | 768×512 | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
Note:

- Wan2.2 TI2V 5B has some quality issues when performing I2V generation. We are working on fixing this issue.
- SageSLA is based on SpargeAttn. Install it first with `pip install git+https://github.com/thu-ml/SpargeAttn.git --no-build-isolation`.
- LTX pipeline selection:
  - One-stage: `--pipeline-class-name LTX2Pipeline`
  - Two-stage: `--pipeline-class-name LTX2TwoStagePipeline`
  - Two-stage HQ: `--pipeline-class-name LTX2TwoStageHQPipeline` (HQ defaults to 1920×1088; you can still override `--width`/`--height`)
- LTX-2 and LTX-2.3 support both T2V and TI2V (`--image-path`) on one-stage and two-stage pipelines (including HQ).
- The spatial upsampler and distilled LoRA are auto-resolved from the model snapshot by default, and can still be overridden with `--spatial-upsampler-path` and `--distilled-lora-path`.
- For LTX models, the Resolutions column uses output video `width×height` semantics, matching `sglang generate --width ... --height ...`.
- LTX-2 / LTX-2.3 two-stage also supports `--ltx2-two-stage-device-mode {original,snapshot,resident}`:
  - `snapshot` is the default and recommended mode.
  - `resident` usually provides the best latency/throughput but uses much more VRAM.
  - `original` keeps the official two-stage semantics without the premerged stage-2 transformer path.
  - Example (one prior run): `original` 154.67s, `snapshot` 114.05s, `resident` 75.71s; peak VRAM trend is `original < snapshot < resident`.
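The relative speedups implied by the timings quoted above can be computed directly; this is a quick sanity check on the quoted numbers, not a benchmark harness:

```python
# Wall-clock times (seconds) from the single prior run quoted above.
timings = {"original": 154.67, "snapshot": 114.05, "resident": 75.71}

# Speedup of each device mode relative to the `original` mode.
baseline = timings["original"]
speedups = {mode: round(baseline / t, 2) for mode, t in timings.items()}
print(speedups)  # → {'original': 1.0, 'snapshot': 1.36, 'resident': 2.04}
```

So on this run, `resident` was roughly 2× faster than `original`, at the cost of the higher peak VRAM noted above.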
Image Generation Models#
| Model Name | Hugging Face Model ID |
|---|---|
| FLUX.1-dev | |
| FLUX.2-dev | |
| FLUX.2-dev-NVFP4 | |
| FLUX.2-Klein-4B | |
| FLUX.2-Klein-9B | |
| Z-Image | |
| Z-Image-Turbo | |
| GLM-Image | |
| Qwen Image | |
| Qwen Image 2512 | |
| Qwen Image Edit | |
| Qwen Image Edit 2509 | |
| Qwen Image Edit 2511 | |
| Qwen Image Layered | |
| SD3 Medium | |
| SD3.5 Medium | |
| SD3.5 Large | |
| Hunyuan3D-2 | |
| SANA 1.5 1.6B | |
| SANA 1.5 4.8B | |
| SANA 1600M 1024px | |
| SANA 600M 1024px | |
| SANA 1600M 512px | |
| SANA 600M 512px | |
| FireRed-Image-Edit 1.0 | |
| FireRed-Image-Edit 1.1 | |
| ERNIE-Image | |
| ERNIE-Image-Turbo | |
Supported Components#
SGLang Diffusion supports overriding individual pipeline components with `--<component>-path`. The value can be either a Hugging Face repo ID or a local component directory.
The same overrides can also be provided in config files through `component_paths.<component>`.
Common Syntax#
CLI:

```shell
sglang generate \
  --model-path black-forest-labs/FLUX.2-dev \
  --vae-path black-forest-labs/FLUX.2-small-decoder \
  --transformer-path /models/flux2/transformer
```

Config file:

```yaml
model_path: black-forest-labs/FLUX.2-dev
component_paths:
  vae: black-forest-labs/FLUX.2-small-decoder
  transformer: /models/flux2/transformer
```
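The flag naming follows a mechanical convention: a component key such as `text_encoder` in `component_paths` corresponds to the CLI flag `--text-encoder-path`. The sketch below illustrates that mapping; it is a standalone illustration, not sglang-diffusion code, and the `text_encoder` path shown is a made-up example:

```python
# Component keys as they would appear under `component_paths` in a config file.
# The text_encoder path is hypothetical, for illustration only.
component_paths = {
    "vae": "black-forest-labs/FLUX.2-small-decoder",
    "transformer": "/models/flux2/transformer",
    "text_encoder": "/models/flux2/text_encoder",
}

def to_cli_flag(component: str) -> str:
    """Map a component key to its --<component>-path CLI flag."""
    return f"--{component.replace('_', '-')}-path"

for key, value in component_paths.items():
    print(to_cli_flag(key), value)
# → --vae-path black-forest-labs/FLUX.2-small-decoder
# → --transformer-path /models/flux2/transformer
# → --text-encoder-path /models/flux2/text_encoder
```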
Use the component name from the pipeline’s `model_index.json` or the native pipeline’s registered module name:

| Component Type | Supported Keys | Notes |
|---|---|---|
| VAE | | |
| Transformer / DiT | | |
| Text / Preprocess | | Replacement encoders often need matching preprocessing assets |
| Auxiliary | | Only valid for pipelines that expose these components |
Known Component Repos#
The table below lists concrete Hugging Face component repos that are already used in SGLang Diffusion docs or tests. It is not an exhaustive catalog of all compatible component repos.
| Base Model | Override Key | Example Repo | Notes |
|---|---|---|---|
| | | | Decoder-only FLUX.2 VAE override |
| | | | Existing tested custom VAE path |
VAE#
- `--vae-path` is the common image-generation override.
- `--video-vae-path` and `--audio-vae-path` are only relevant for pipelines with separate video or audio VAEs.
Transformer / DiT#
- `--transformer-path` is the standard override for the main denoising transformer.
- For quantized transformers, prefer `--transformer-path` or `--transformer-weights-path`; see `quantization.md`.
- `--video-dit-path` and `--audio-dit-path` are only for pipelines that split denoisers by modality.
Text Encoders and Preprocessors#
- `--text-encoder-path` and `--text-encoder-2-path` override the primary and secondary text encoders.
- `--tokenizer-path`, `--processor-path`, and `--image-processor-path` are useful when the replacement encoder requires matching preprocessing assets.
Auxiliary Components#
- `--scheduler-path` is only relevant when the pipeline exposes a scheduler component.
- `--spatial-upsampler-path` is mainly for two-stage pipelines such as `LTX2TwoStagePipeline`.
- `--vocoder-path`, `--connectors-path`, `--dual-tower-bridge-path`, `--image-encoder-path`, and `--vision-language-encoder-path` are only valid for pipelines that expose those components.
Notes#
Component overrides are only valid when the target pipeline actually uses that component.
The override key should match the component name in the pipeline’s `model_index.json` or the native pipeline’s registered module name.
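For Diffusers-style pipelines, valid override keys can be discovered by listing the non-underscore entries of the snapshot’s `model_index.json` (underscore-prefixed keys are metadata). The index content below is a hypothetical example of that format, not copied from a real model snapshot:

```python
import json

# Hypothetical model_index.json content; the real file sits at the root of a
# Diffusers-style pipeline snapshot. Keys starting with "_" are metadata; the
# remaining keys are component names usable with --<component>-path.
model_index = json.loads("""
{
  "_class_name": "ExamplePipeline",
  "_diffusers_version": "0.30.0",
  "vae": ["diffusers", "AutoencoderKL"],
  "transformer": ["diffusers", "ExampleTransformer2DModel"],
  "text_encoder": ["transformers", "CLIPTextModel"],
  "scheduler": ["diffusers", "FlowMatchEulerDiscreteScheduler"]
}
""")

components = sorted(k for k in model_index if not k.startswith("_"))
print(components)  # → ['scheduler', 'text_encoder', 'transformer', 'vae']
```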
Verified LoRA Examples#
This section lists example LoRAs that have been explicitly tested and verified with each base model in the SGLang Diffusion pipeline.
Important: LoRAs that are not listed here are not necessarily incompatible. In practice, most standard LoRAs are expected to work, especially those following common Diffusers or SD-style conventions. The entries below simply reflect configurations that have been manually validated by the SGLang team.
Verified LoRAs by Base Model#
| Base Model | Supported LoRAs |
|---|---|
| Wan2.2 | |
| Wan2.1 | |
| Z-Image-Turbo | |
| Qwen-Image | |
| Qwen-Image-Edit | |
| Flux | |
Special Requirements#
Sliding Tile Attention#
Currently, Sliding Tile Attention is only supported on Hopper GPUs (e.g., H100).