In this work, we propose GLOV, a novel method that enables Large Language Models (LLMs) to act as implicit optimizers for Vision-Language Models (VLMs) to enhance downstream vision tasks. GLOV meta-prompts an LLM with the downstream task description and queries it for suitable VLM prompts (e.g., for zero-shot classification with CLIP). The returned prompts are ranked by their fitness for the downstream vision task. In each optimization step, the ranked prompts are fed back as in-context examples (together with their accuracies) to equip the LLM with knowledge of the type of prompts preferred by the downstream VLM. In addition, we explicitly steer the LLM generation at each optimization step by adding an offset vector, computed as the difference between the embeddings of the positive and negative solutions found in previous optimization steps, to an intermediate layer of the network for the next generation. This offset steers the LLM generation toward the kind of language preferred by the downstream VLM, resulting in enhanced performance on the downstream vision tasks. We comprehensively evaluate GLOV on 16 diverse datasets using two families of VLMs, i.e., dual-encoder (e.g., CLIP) and encoder-decoder (e.g., LLaVA) models, showing that the discovered solutions can enhance recognition performance by up to 15.0% and 57.5% (3.8% and 21.6% on average) for these two model families, respectively.
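The loop described above can be pictured with a short, self-contained sketch. Everything concrete in it is an assumption for illustration rather than the paper's implementation: the choice of TinyLlama as the LLM, CLIP ViT-B/32 with a small CIFAR-10 subset as the downstream task, the steered layer index `LAYER_IDX`, the strength `ALPHA`, the meta-prompt wording, and the helper names (`evaluate_prompt_accuracy`, `sentence_embedding`, `make_steering_hook`, `build_meta_prompt`) are all placeholders. Only the overall structure follows the description: rank prompts by VLM accuracy, feed them back as in-context examples, and add a positive-minus-negative embedding offset at an intermediate LLM layer during the next generation.

```python
# Minimal, illustrative sketch of a GLOV-style loop (not the authors' code).
# All model names, layer indices, hyperparameters, and helper functions below
# are assumptions chosen for a small runnable example.
import torch
from torchvision.datasets import CIFAR10
from transformers import AutoModelForCausalLM, AutoTokenizer, CLIPModel, CLIPProcessor

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
LLM_NAME = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"   # assumption: any chat LLM with LLaMA-style layers
tokenizer = AutoTokenizer.from_pretrained(LLM_NAME)
llm = AutoModelForCausalLM.from_pretrained(
    LLM_NAME, torch_dtype=torch.float16 if DEVICE == "cuda" else torch.float32
).to(DEVICE).eval()

clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(DEVICE).eval()
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
val_set = CIFAR10(root="data", train=False, download=True)   # stand-in downstream dataset
held_out = [val_set[i] for i in range(0, 1000, 10)]           # small held-out split
classnames = val_set.classes

LAYER_IDX = 12   # intermediate decoder layer that receives the offset (assumption)
ALPHA = 2.0      # steering strength (assumption)


def evaluate_prompt_accuracy(template: str) -> float:
    """Fitness of a prompt template: zero-shot CLIP accuracy on the held-out split."""
    if "{}" not in template:
        return 0.0
    texts = [template.format(c) for c in classnames]
    correct = 0
    with torch.no_grad():
        for img, label in held_out:
            batch = clip_proc(text=texts, images=img, return_tensors="pt", padding=True).to(DEVICE)
            correct += int(clip_model(**batch).logits_per_image.argmax(dim=-1).item() == label)
    return 100.0 * correct / len(held_out)


def sentence_embedding(text: str) -> torch.Tensor:
    """Mean-pooled hidden state of `text` at the steered layer of the LLM."""
    batch = tokenizer(text, return_tensors="pt").to(DEVICE)
    with torch.no_grad():
        out = llm(**batch, output_hidden_states=True)
    # hidden_states[0] is the embedding output, so layer i's output sits at index i + 1.
    return out.hidden_states[LAYER_IDX + 1][0].mean(dim=0)


def make_steering_hook(offset: torch.Tensor):
    """Add the positive-minus-negative offset to the layer output during generation."""
    def hook(module, inputs, output):
        if isinstance(output, tuple):
            return (output[0] + ALPHA * offset.to(output[0].dtype),) + output[1:]
        return output + ALPHA * offset.to(output.dtype)
    return hook


def build_meta_prompt(task: str, ranked: list) -> str:
    """Task description plus ranked (prompt, accuracy) pairs as in-context examples."""
    examples = "\n".join(f"Prompt: {p}\nAccuracy: {a:.1f}" for p, a in ranked)
    return (f"{task}\nPrevious prompt templates and their accuracies:\n{examples}\n"
            "Write one new prompt template (use {} for the class name) that scores higher.\nPrompt:")


pool = [("a photo of a {}.", evaluate_prompt_accuracy("a photo of a {}."))]

for step in range(10):
    pool.sort(key=lambda x: x[1])   # ascending: worst first, best last
    meta_prompt = build_meta_prompt("Classify natural images into the given categories.", pool[-5:])

    # Offset between the embeddings of the best (positive) and worst (negative) prompts so far.
    offset = sentence_embedding(pool[-1][0]) - sentence_embedding(pool[0][0])
    handle = llm.model.layers[LAYER_IDX].register_forward_hook(make_steering_hook(offset))
    try:
        batch = tokenizer(meta_prompt, return_tensors="pt").to(DEVICE)
        out_ids = llm.generate(**batch, max_new_tokens=40, do_sample=True, temperature=0.8)
        candidate = tokenizer.decode(out_ids[0, batch["input_ids"].shape[1]:], skip_special_tokens=True)
    finally:
        handle.remove()

    candidate = candidate.strip().splitlines()[0] if candidate.strip() else candidate
    pool.append((candidate, evaluate_prompt_accuracy(candidate)))

best_prompt, best_acc = max(pool, key=lambda x: x[1])
print(f"Best prompt: {best_prompt!r}  accuracy: {best_acc:.1f}%")
```

Registering the steering offset as a forward hook and removing it after each generation keeps the base LLM unchanged between optimization steps; only the candidate pool and the offset vector carry state from one step to the next.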
@article{mirza2024glov,
author = {Mirza, M. Jehanzeb and Zhao, Mengjie and Mao, Zhuoyuan and Doveh, Sivan and Lin, Wei and Gavrikov, Paul and Dorkenwald, Michael and Yang, Shiqi and Jha, Saurav and Wakaki, Hiromi and Mitsufuji, Yuki and Possegger, Horst and Feris, Rogerio and Karlinsky, Leonid and Glass, James},
journal = {arXiv preprint},
title = {GLOV: Guided Large Language Models as Implicit Optimizers for Vision Language Models},
year = {2024}
}