Installation and Usage Guide
This guide will help you get started with Z-Image Turbo. There are multiple ways to use the model, from simple online demos to local installations for full control.
Quick Start Options
Choose the method that best fits your needs and technical expertise.
Option 1: Online Demo (Easiest)
The fastest way to try Z-Image Turbo is through the online demo. No installation required.
- Visit the demo page on this website
- Enter your text prompt in the input field
- Adjust settings like resolution, steps, and seed
- Click Generate to create your image
- Download or share your generated image
Pros: No setup, instant access, works on any device
Cons: Requires internet connection, shared resources
Option 2: HuggingFace Integration
Use Z-Image Turbo through HuggingFace for API access and integration with your projects.
Using the HuggingFace Web Interface
- Visit HuggingFace model page: Tongyi-MAI/Z-Image-Turbo
- Click on the "Hosted inference API" section
- Enter your prompt in the text field
- Click "Compute" to generate your image
- View and download the result
Using the HuggingFace Python API
First, install the required packages:
```bash
pip install huggingface-hub diffusers torch transformers
```

Then use this Python code:

```python
from diffusers import DiffusionPipeline
import torch

# Load the model
pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo",
    torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Generate an image
prompt = "A beautiful sunset over mountains"
image = pipe(
    prompt,
    num_inference_steps=8,
    guidance_scale=3.0
).images[0]

# Save the image
image.save("output.png")
```

Pros: API access, easy integration, well-documented
Cons: Requires HuggingFace account for API, rate limits may apply
Option 3: ModelScope Platform
ModelScope, a model hub that is particularly popular in China, also hosts Z-Image Turbo.
Using ModelScope
- Visit ModelScope: modelscope.cn/models/Tongyi-MAI/Z-Image-Turbo
- Create a ModelScope account if you don't have one
- Use the online interface or download the model
- Follow the documentation provided on the platform
ModelScope Python SDK
Install the ModelScope library:
```bash
pip install modelscope
```

Use the model:

```python
from modelscope import pipeline

# Load the model
pipe = pipeline(
    'text-to-image-synthesis',
    model='Tongyi-MAI/Z-Image-Turbo'
)

# Generate an image
result = pipe({'text': 'A beautiful landscape'})
result['output_imgs'][0].save('output.png')
```

Pros: Good for Chinese users, integrated ecosystem
Cons: Less familiar to international users
Option 4: Local Installation (Advanced)
For full control and offline usage, install Z-Image Turbo locally.
System Requirements
- GPU with at least 16GB VRAM (NVIDIA recommended)
- Python 3.8 or higher
- CUDA 11.7 or higher (for NVIDIA GPUs)
- At least 20GB of free disk space
- Linux, Windows, or macOS operating system
Installation Steps
Step 1: Set up Python environment
```bash
# Create a virtual environment
python -m venv zimage-env

# Activate the environment
# On Linux/Mac:
source zimage-env/bin/activate
# On Windows:
zimage-env\Scripts\activate
```
Step 2: Install PyTorch with CUDA support
```bash
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```

Step 3: Install required packages
```bash
pip install diffusers transformers accelerate safetensors
pip install xformers  # Optional, for better performance
```
Step 4: Clone the repository (optional)
```bash
git clone https://github.com/Tongyi-MAI/Z-Image.git
cd Z-Image
```
Step 5: Download the model weights
The model will be downloaded automatically on first use, or you can pre-download it:
```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Tongyi-MAI/Z-Image-Turbo",
    local_dir="./models/z-image-turbo"
)
```

Pros: Full control, offline usage, no rate limits, best performance
Cons: Requires powerful hardware, more complex setup
Basic Usage Examples
Simple Image Generation
```python
from diffusers import DiffusionPipeline
import torch

# Load model
pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo",
    torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Generate image
prompt = "A serene lake surrounded by mountains at sunset"
image = pipe(prompt, num_inference_steps=8).images[0]
image.save("lake_sunset.png")
```

Advanced Generation with Parameters
```python
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo",
    torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Generate with custom parameters
prompt = "A modern coffee shop with 'Fresh Coffee' sign"
image = pipe(
    prompt,
    num_inference_steps=8,
    guidance_scale=3.0,
    height=1024,
    width=1024,
    generator=torch.Generator("cuda").manual_seed(42)
).images[0]
image.save("coffee_shop.png")
```

Batch Generation
```python
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo",
    torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompts = [
    "A mountain landscape",
    "A city skyline at night",
    "A tropical beach"
]

for i, prompt in enumerate(prompts):
    image = pipe(prompt, num_inference_steps=8).images[0]
    image.save(f"image_{i}.png")
    print(f"Generated image {i+1}/{len(prompts)}")
```

Parameter Reference
| Parameter | Description | Default | Range |
|---|---|---|---|
| prompt | Text description of desired image | Required | Any text |
| num_inference_steps | Number of denoising steps | 8 | 1-100 |
| guidance_scale | How closely to follow the prompt | 3.0 | 1.0-10.0 |
| height | Image height in pixels | 1024 | 512-2048 |
| width | Image width in pixels | 1024 | 512-2048 |
| generator | torch.Generator with a fixed seed, for reproducible outputs | None (random) | Any seed via manual_seed() |
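The ranges in the table can be checked before calling the pipeline, so a bad request fails fast instead of mid-generation. The `validate_params` helper below is a hypothetical sketch (not part of diffusers or Z-Image) that simply mirrors the table:

```python
def validate_params(num_inference_steps=8, guidance_scale=3.0,
                    height=1024, width=1024):
    """Check generation parameters against the documented ranges.

    Hypothetical helper mirroring the parameter table above; raises
    ValueError on out-of-range values so errors surface before any
    GPU work starts.
    """
    if not 1 <= num_inference_steps <= 100:
        raise ValueError("num_inference_steps must be in 1-100")
    if not 1.0 <= guidance_scale <= 10.0:
        raise ValueError("guidance_scale must be in 1.0-10.0")
    for name, value in (("height", height), ("width", width)):
        if not 512 <= value <= 2048:
            raise ValueError(f"{name} must be in 512-2048")
    return True

# The documented defaults pass:
validate_params()
```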
Troubleshooting
Out of Memory Error
Problem: GPU runs out of memory during generation
Solution:
- Reduce image resolution
- Use torch.float16 instead of float32
- Enable memory-efficient attention with xformers
- Close other GPU-intensive applications
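The first tip, reducing resolution, can be automated. The helper below is an illustrative sketch (not an official API): it scales the requested dimensions down and snaps them to a lower multiple of 64, since diffusion pipelines typically require dimensions divisible by the model's downsampling factor:

```python
def reduce_resolution(height, width, factor=0.75, multiple=64):
    """Scale image dimensions down by `factor` and round down to a
    multiple of `multiple` (diffusion pipelines usually need dimensions
    divisible by 8 or 64). Returns the new (height, width)."""
    new_h = max(multiple, int(height * factor) // multiple * multiple)
    new_w = max(multiple, int(width * factor) // multiple * multiple)
    return new_h, new_w

print(reduce_resolution(1024, 1024))  # (768, 768)
```

If 768x768 still runs out of memory, call it again on the result, or lower `factor`.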
Slow Generation Speed
Problem: Image generation takes too long
Solution:
- Install xformers for optimized attention
- Use torch.compile() for faster inference
- Ensure CUDA is properly installed
- Use the recommended 8 steps instead of more
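Before applying any of these fixes, it helps to measure whether the time is going into model loading or generation. A minimal stdlib timing helper (an illustrative sketch, not part of the libraries above):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    """Print how long the wrapped block took, e.g. model load vs. generation."""
    start = time.perf_counter()
    try:
        yield
    finally:
        print(f"{label}: {time.perf_counter() - start:.2f}s")

# Usage sketch:
# with timed("load"):
#     pipe = DiffusionPipeline.from_pretrained(...)
# with timed("generate"):
#     image = pipe(prompt, num_inference_steps=8).images[0]
with timed("example sleep"):
    time.sleep(0.1)
```

If loading dominates, keep the pipeline in memory between generations rather than reloading it per image.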
Poor Image Quality
Problem: Generated images don't match expectations
Solution:
- Write more detailed and specific prompts
- Adjust guidance_scale (try 3.0-7.0)
- Experiment with different seeds
- Ensure you're using 8 inference steps
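One way to make prompts "more detailed and specific" consistently is to compose them from parts (subject, style, lighting, extra details). `build_prompt` below is a hypothetical helper, not part of Z-Image; only the prompt string it produces matters:

```python
def build_prompt(subject, style=None, lighting=None, extras=()):
    """Join subject, style, lighting, and extra detail phrases into one
    comma-separated prompt string; None/empty pieces are skipped."""
    parts = [subject]
    if style:
        parts.append(style)
    if lighting:
        parts.append(lighting)
    parts.extend(extras)
    return ", ".join(parts)

print(build_prompt(
    "a serene lake surrounded by mountains",
    style="oil painting",
    lighting="golden hour",
    extras=("high detail", "soft reflections"),
))
# a serene lake surrounded by mountains, oil painting, golden hour, high detail, soft reflections
```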
Installation Issues
Problem: Errors during installation
Solution:
- Verify Python version (3.8 or higher)
- Check CUDA compatibility with your GPU
- Install packages one at a time to identify issues
- Use a fresh virtual environment
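The Python-version and disk-space requirements from the System Requirements section can be checked with the standard library alone, before installing anything. This is an illustrative sketch matching those documented minimums (Python 3.8+, ~20 GB free disk):

```python
import shutil
import sys

def check_environment(min_python=(3, 8), min_free_gb=20):
    """Return a list of problems against the documented requirements;
    an empty list means the basic checks passed. GPU/CUDA checks are
    left out here because they require torch to be installed already."""
    problems = []
    if sys.version_info[:2] < min_python:
        problems.append(
            f"Python {sys.version_info.major}.{sys.version_info.minor} "
            f"< required {min_python[0]}.{min_python[1]}"
        )
    free_gb = shutil.disk_usage(".").free / 10**9
    if free_gb < min_free_gb:
        problems.append(f"only {free_gb:.1f} GB free disk, need {min_free_gb} GB")
    return problems

for p in check_environment():
    print("WARNING:", p)
```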
Additional Resources
- GitHub Repository: github.com/Tongyi-MAI/Z-Image
- HuggingFace Model: huggingface.co/Tongyi-MAI/Z-Image-Turbo
- ModelScope: modelscope.cn/models/Tongyi-MAI/Z-Image-Turbo
- Official Homepage: tongyi-mai.github.io/Z-Image-homepage/
Note: This installation guide is based on publicly available information. For the most up-to-date instructions, please refer to the official Z-Image documentation.