xDiT is an open-source library that accelerates inference for Diffusion Transformer (DiT) models by using parallelism and optimization techniques. These techniques enable a scalable multi-GPU setup for demanding workloads. This page demonstrates how to deploy DiT models by using xDiT and Cloud GPUs on Vertex AI.
For more information about xDiT, see the xDiT GitHub project.
Benefits
The following list describes the key benefits of using xDiT to serve DiT models on Vertex AI:
- Up to three times faster generation: Generate high-resolution images and videos in a fraction of the time compared to other serving solutions.
- Scalable multi-GPU support: Efficiently distribute workloads across
multiple GPUs for optimal performance.
- Hybrid parallelism: xDiT supports various parallel processing approaches, such as unified sequence parallelism, PipeFusion, CFG parallelism, and data parallelism. These methods can be combined in a unique recipe to optimize performance.
- Optimized single-GPU performance: xDiT provides faster inference even on a
single GPU.
- GPU acceleration: xDiT incorporates several kernel acceleration methods and uses techniques from DiTFastAttn to speed up inference on a single GPU.
- Easy deployment: Get started quickly with one-click deployment or Colab Enterprise notebooks in Vertex AI Model Garden.
Supported models
xDiT is available for certain DiT model architectures in Vertex AI Model Garden, such as Flux.1 Schnell and CogVideoX-2b. To see whether a DiT model supports xDiT in Model Garden, view its model card in Model Garden.
Hybrid parallelism for multi-GPU performance
xDiT uses a combination of parallelism techniques to maximize performance on multi-GPU setups. These techniques work together to distribute the workload and optimize resource utilization:
- Unified sequence parallelism: This technique splits the input data (such as splitting an image into patches) across multiple GPUs, reducing the memory usage and improving scalability.
- PipeFusion: PipeFusion divides the DiT model into stages and assigns each stage to a different GPU, enabling parallel processing of different parts of the model.
- CFG parallelism: This technique optimizes models that use classifier-free guidance, a common method for controlling the style and content of generated images. It parallelizes the computation of the conditional and unconditional branches, leading to faster inference.
- Data parallelism: This method replicates the entire model on each GPU, with each GPU processing a different batch of input data, increasing the overall throughput of the system.
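These techniques compose: each degree consumes a share of the available GPUs, so the product of the configured degrees must match the GPU count. The following sketch illustrates that constraint with a hypothetical helper (not part of xDiT); the parameter names mirror the environment variables described later on this page, and the assumption that CFG parallelism contributes a constant factor of 2 follows the argument descriptions below.

```python
# Illustrative sketch, not xDiT code: check that the hybrid parallel
# degrees consume exactly the available GPUs.
def check_parallel_config(n_gpus, ulysses_degree=1, ring_degree=1,
                          pipefusion_degree=1, use_cfg_parallel=False):
    """Return True if the configured degrees multiply to n_gpus.

    CFG parallelism, when enabled, contributes a constant factor of 2
    (conditional and unconditional branches run in parallel).
    """
    cfg_factor = 2 if use_cfg_parallel else 1
    required = ulysses_degree * ring_degree * pipefusion_degree * cfg_factor
    return required == n_gpus

# Example: 2 GPUs fully consumed by ring sequence parallelism.
print(check_parallel_config(2, ring_degree=2))  # True
# Example: degrees that don't match the GPU count.
print(check_parallel_config(3, ring_degree=2))  # False
```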
For more information about performance improvements, see xDiT's report on Flux.1 Schnell or CogVideoX-2b. Google was able to reproduce these results on Vertex AI Model Garden.
Single GPU acceleration
The xDiT library provides benefits for single-GPU serving by using torch.compile and onediff to enhance runtime speed on GPUs. These techniques can also be used in conjunction with hybrid parallelism.
xDiT also provides an efficient attention computation technique, called DiTFastAttn, to address DiT's computational bottleneck. Currently, this technique is available only for single-GPU setups or in conjunction with data parallelism.
Get started in Model Garden
The xDiT optimized Cloud GPU serving container is provided within Vertex AI Model Garden. For supported models, deployments use this container when you use one-click deployments or the Colab Enterprise notebook examples.
The following examples use the Flux.1-schnell model to demonstrate how to deploy a DiT model on an xDiT container.
Use one-click deployment
You can deploy a custom Vertex AI endpoint with the xDiT container by using a model card.
1. Navigate to the model card page and click Deploy.
2. For the model variation that you want to use, select a machine type for your deployment.
3. Click Deploy to begin the deployment process. You receive two email notifications: one when the model is uploaded and another when the endpoint is ready.
Use the Colab Enterprise notebook
For flexibility and customization, use the Colab Enterprise notebook examples to deploy a Vertex AI endpoint with the xDiT container by using the Vertex AI SDK for Python.
1. Navigate to the model card page and click Open notebook.
2. Select the Vertex Serving notebook. The notebook opens in Colab Enterprise.
3. Run through the notebook to deploy a model by using the xDiT container and to send prediction requests to the endpoint. The code snippet for the deployment is as follows:
from google.cloud import aiplatform

# xDiT serving container image provided by Vertex AI Model Garden.
XDIT_DOCKER_URI = "us-docker.pkg.dev/deeplearning-platform-release/vertex-model-garden/xdit-serve.cu125.0-1.ubuntu2204.py310"

serving_env = {
    "MODEL_ID": "black-forest-labs/FLUX.1-schnell",
    "TASK": "text-to-image",
    "DEPLOY_SOURCE": "notebook",
    "N_GPUS": "2",
    "ULYSSES_DEGREE": "1",
    "RING_DEGREE": "2",
    "PIPEFUSION_PARALLEL_DEGREE": "1",
    "USE_TORCH_COMPILE": "true",
}

model = aiplatform.Model.upload(
    display_name=model_name,
    serving_container_image_uri=XDIT_DOCKER_URI,
    serving_container_ports=[7080],
    serving_container_predict_route="/predict",
    serving_container_health_route="/health",
    serving_container_environment_variables=serving_env,
)

model.deploy(
    endpoint=endpoint,
    machine_type="a3-highgpu-2g",
    accelerator_type="NVIDIA_H100_80GB",
    accelerator_count=2,
    deploy_request_timeout=1800,
    service_account=SERVICE_ACCOUNT,
)
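After the deployment completes, the notebook sends prediction requests to the endpoint. The snippet below sketches what such a request payload might look like; the instance fields ("text", "height", "width", "num_inference_steps") are illustrative assumptions, not a documented schema, so consult the notebook for the exact format that the container expects.

```python
# Hedged sketch: building a prediction request for the deployed
# endpoint. Field names in the instance dict are assumptions for
# illustration only.
import json

instances = [{
    "text": "A photo of a cat holding a sign that says hello world",
    "height": 1024,               # assumed parameter name
    "width": 1024,                # assumed parameter name
    "num_inference_steps": 4,     # assumed parameter name
}]

# In the notebook, the request would be sent with something like:
# response = endpoint.predict(instances=instances)

print(json.dumps(instances[0], indent=2))
```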
Environment variables
- MODEL_ID: Specifies the ID of the DiT model to deploy, such as 'black-forest-labs/FLUX.1-schnell'.
- TASK: Defines the task the model performs, such as 'text-to-image-flux-xdit'.
- N_GPUS: Sets the number of GPUs to use for inference.
- ULYSSES_DEGREE, RING_DEGREE, PIPEFUSION_PARALLEL_DEGREE: Control the parallelism techniques that xDiT uses. For details about each argument, see xDiT arguments.
- USE_TORCH_COMPILE: Enables single-GPU acceleration by using torch.compile.
xDiT arguments
xDiT offers a range of server arguments that can be configured to optimize performance for specific use cases. These arguments are set as environment variables during deployment. The following list describes the key arguments you might need to configure:
- N_GPUS (integer): Specifies the number of GPUs to use for inference. The default value is 1.
- ENABLE_TILING (Boolean): Reduces GPU memory usage by decoding the VAE component one tile at a time. This argument is useful for larger images or videos and for preventing out-of-memory errors. The default value is false.
- ENABLE_SLICING (Boolean): Reduces GPU memory usage by splitting the input tensor into slices for VAE decoding. The default value is false.
- USE_TORCH_COMPILE (Boolean): Enables single-GPU acceleration through torch.compile, improving runtime performance. The default value is false.
- PIPEFUSION_PARALLEL_DEGREE (integer): Sets the degree of parallelism for PipeFusion. Higher values increase parallelism but might require more memory. The default value is 1.
- WARMUP_STEPS (integer): If PipeFusion is enabled, this argument specifies the number of warmup steps required before inference begins. The default value is 0.
- ULYSSES_DEGREE (integer): Sets the Ulysses degree. The default value is 1.
- RING_DEGREE (integer): Sets the Ring degree. The default value is 1.
- USE_CFG_PARALLEL (Boolean): Enables parallel computation for classifier-free guidance (CFG), a technique that is used to control the output of DiT models. When enabled, the constant parallelism degree is 2. Set to 'true' when using CFG. The default value is false.
- USE_PARALLEL_VAE (Boolean): Enables efficient processing of high-resolution images (greater than 2048 pixels) by parallelizing the VAE component. The default value is false.
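Because these arguments arrive as string-valued environment variables, the serving container must coerce them into typed settings. The following hypothetical helper (not part of xDiT or the serving container) sketches that coercion, applying the documented defaults from the list above:

```python
# Illustrative sketch: coerce the string-valued xDiT environment
# variables into typed settings, using the defaults documented above.
import os

DEFAULTS = {
    "N_GPUS": 1,
    "ENABLE_TILING": False,
    "ENABLE_SLICING": False,
    "USE_TORCH_COMPILE": False,
    "PIPEFUSION_PARALLEL_DEGREE": 1,
    "WARMUP_STEPS": 0,
    "ULYSSES_DEGREE": 1,
    "RING_DEGREE": 1,
    "USE_CFG_PARALLEL": False,
    "USE_PARALLEL_VAE": False,
}

def load_xdit_settings(env=os.environ):
    """Read each known variable from env, falling back to its default."""
    settings = {}
    for name, default in DEFAULTS.items():
        raw = env.get(name)
        if raw is None:
            settings[name] = default
        elif isinstance(default, bool):
            # Booleans are passed as the strings "true" / "false".
            settings[name] = raw.strip().lower() == "true"
        else:
            settings[name] = int(raw)
    return settings

# Example matching the deployment snippet on this page.
print(load_xdit_settings({"N_GPUS": "2", "RING_DEGREE": "2",
                          "USE_TORCH_COMPILE": "true"}))
```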
For a full list of arguments, see the xFuserArgs class in the xDiT GitHub project.