This repository is the official implementation of "Image Understanding Makes for A Good Tokenizer for Image Generation".
Image understanding (IU) and image generation (IG) have long been central to computer vision research. While many studies explore how IG models can aid IU, few investigate the reverse—using IU models to enhance IG.
This work bridges the gap by introducing IU-based tokenizers in the AutoRegressive (AR) IG framework. Specifically, we evaluate the VQ-KD and Cluster tokenizers.
Both tokenizers leverage pretrained IU models such as CLIP and deliver superior results compared to traditional tokenizers. The following sections provide detailed instructions for training and validating them.
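To make the idea concrete, below is a minimal sketch (not this repository's implementation) of a Cluster-style, IU-based tokenizer: patch features from a frozen CLIP vision encoder are assigned to their nearest centroid, yielding a sequence of discrete token ids. The model name, codebook size, and the random centroids are illustrative assumptions; in practice the centroids would come from k-means over training-set features.

```python
# Minimal sketch of a Cluster-style, IU-based tokenizer: CLIP patch features
# are mapped to the index of their nearest centroid. The model name, codebook
# size, and the random centroids below are assumptions for illustration only.
import torch
from transformers import CLIPImageProcessor, CLIPVisionModel

name = "openai/clip-vit-base-patch16"
processor = CLIPImageProcessor.from_pretrained(name)
encoder = CLIPVisionModel.from_pretrained(name).eval()

num_codes = 8192                                   # assumed codebook size
centroids = torch.randn(num_codes, encoder.config.hidden_size)

@torch.no_grad()
def tokenize(images):
    """Map a batch of PIL images to (B, num_patches) discrete token ids."""
    pixels = processor(images=images, return_tensors="pt")["pixel_values"]
    feats = encoder(pixel_values=pixels).last_hidden_state[:, 1:]    # drop [CLS]
    dists = torch.cdist(feats, centroids.expand(feats.size(0), -1, -1))
    return dists.argmin(dim=-1)                    # nearest-centroid code per patch
```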
Please follow data.md and installation.md to prepare the data and environment.
Use pretrained_models.md to download the pretrained models.
Generate the FID cache as described in metrics.md.
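For intuition, an FID cache is typically a file of precomputed reference statistics. The sketch below is a hedged illustration, with an assumed folder path, preprocessing, and output file name; it extracts Inception-v3 pool features over a reference image folder and saves their mean and covariance. Follow metrics.md for the exact procedure used here.

```python
# Hedged sketch of an FID reference cache: mean and covariance of Inception-v3
# pool features over a reference image folder. The folder path, preprocessing,
# and output file name are assumptions; follow metrics.md for the exact steps.
import numpy as np
import torch
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
inception = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
inception.fc = torch.nn.Identity()                 # expose the 2048-d pool features
inception.eval().to(device)

tf = transforms.Compose([
    transforms.Resize(299), transforms.CenterCrop(299), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
loader = torch.utils.data.DataLoader(
    datasets.ImageFolder("path/to/reference_images", transform=tf),   # hypothetical path
    batch_size=64, num_workers=8)

feats = []
with torch.no_grad():
    for images, _ in loader:
        feats.append(inception(images.to(device)).cpu().numpy())
feats = np.concatenate(feats)
np.savez("fid_cache.npz", mu=feats.mean(axis=0), sigma=np.cov(feats, rowvar=False))
```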
Please refer to training.md and validation.md for detailed instructions on training and validating the tokenizers. The model card is available in model_card.md.
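Conceptually, the generation stage trains an autoregressive transformer with next-token prediction over the token sequences produced by a frozen tokenizer. The sketch below is a simplified illustration with assumed model sizes and hyperparameters; training.md describes the actual recipe.

```python
# Simplified next-token-prediction step for the AR generation stage.
# All sizes and hyperparameters below are assumptions for illustration only;
# `tokens` are discrete ids produced by a frozen tokenizer.
import torch
import torch.nn.functional as F

vocab_size, dim, seq_len = 8192, 512, 196                    # assumed codebook / sequence sizes
embed = torch.nn.Embedding(vocab_size, dim)
pos = torch.nn.Parameter(torch.zeros(1, seq_len - 1, dim))   # learned positional embedding
backbone = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=12)
head = torch.nn.Linear(dim, vocab_size)
params = [pos, *embed.parameters(), *backbone.parameters(), *head.parameters()]
optimizer = torch.optim.AdamW(params, lr=1e-4)

def training_step(tokens):                                    # tokens: (B, seq_len) int64 ids
    causal = torch.nn.Transformer.generate_square_subsequent_mask(seq_len - 1)
    hidden = backbone(embed(tokens[:, :-1]) + pos, mask=causal)
    logits = head(hidden)                                     # predict token t+1 from tokens <= t
    loss = F.cross_entropy(logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```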
This project draws inspiration from the following works:
For a full list of influential works, please refer to our paper.