This is the official implementation of the paper *Exploring Visual Prompts for Adapting Large-Scale Models*.
## Installation

Clone this repo:

```sh
git clone https://github.com/hjbahng/visual_prompting.git
cd visual_prompting
```

This code requires Python 3+. Install dependencies with:

```sh
pip install -r requirements.txt
```

Prepare the pre-trained models:

```sh
bash models/download_models.sh
```
## Train/Test for CLIP

Train a visual prompt:

```sh
python main_clip.py --dataset cifar100 --root [path_to_cifar100]
```

Test the visual prompt:

```sh
python main_clip.py --evaluate --resume /path/to/checkpoints/model_best.pth.tar --dataset cifar100 --root [path_to_cifar100]
```

## Train/Test for Vision Models

Train a visual prompt:

```sh
python main_vision.py --model bit_m_rn50 --dataset cifar100 --root [path_to_cifar100]
```

Test the visual prompt:

```sh
python main_vision.py --evaluate --resume /path/to/checkpoints/model_best.pth.tar --model bit_m_rn50 --dataset cifar100 --root [path_to_cifar100]
```

There are three model choices: `rn50`, `instagram_resnext101_32x8d`, and `bit_m_rn50`. Note that we use `--batch_size 32` for `instagram_resnext101_32x8d` and `--batch_size 128` for the other models.
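Conceptually, the visual prompt trained by the commands above is a small set of learnable pixels applied around the border of every input image, while the pre-trained model stays frozen. As a rough illustration only (not the repo's actual code), here is a minimal NumPy sketch of a padding-style prompt; the helper name `apply_padding_prompt`, the `(C, H, W)` layout, and the pad width of 30 are all assumptions for the example:

```python
import numpy as np

def apply_padding_prompt(image, prompt, pad_size):
    """Add a learnable 'frame' of width pad_size around the image border.

    image:  (C, H, W) array, the input image (left unchanged in the center).
    prompt: (C, H, W) array of learnable prompt values (only its border is used).
    """
    _, h, w = image.shape
    # Boolean mask selecting the border region of width pad_size.
    mask = np.zeros((h, w), dtype=bool)
    mask[:pad_size, :] = True
    mask[-pad_size:, :] = True
    mask[:, :pad_size] = True
    mask[:, -pad_size:] = True
    # Add the prompt on the border; pass the center through untouched.
    return np.where(mask, image + prompt, image)

# Toy example: a zero image with an all-ones prompt.
img = np.zeros((3, 224, 224), dtype=np.float32)
prompt = np.ones((3, 224, 224), dtype=np.float32)
out = apply_padding_prompt(img, prompt, pad_size=30)
```

In training, `prompt` would be a parameter updated by gradient descent on the frozen model's loss; here it is just a constant to show where the prompt pixels land.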
## Citation

If you use this code for your research, please cite our paper:

```bibtex
@article{bahng2022visual,
  title={Exploring Visual Prompts for Adapting Large-Scale Models},
  author={Hyojin Bahng and Ali Jahanian and Swami Sankaranarayanan and Phillip Isola},
  journal={arXiv preprint arXiv:2203.17274},
  year={2022}
}
```
