MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation?

1UNC-Chapel Hill, 2University of Chicago, 3Stanford University,
4UCSC, 5UCSD, 6USTC, 7ESSEC, 8Peking University, 9Illinois Tech,
10Duke University, 11University of Queensland, 12Stony Brook University, 13NUS
MJ-Bench Team
*Core Contributors.
Teaser image.

We evaluate a large variety of multimodal judges on the MJ-Bench dataset. We compare their feedback across four comprehensive perspectives, each decomposed into multiple sub-categories. Additionally, we study the effectiveness of the feedback under different scales and input modes.

Abstract

While text-to-image models like DALL-E 3 and Stable Diffusion are rapidly proliferating, they often encounter challenges such as hallucination, bias, and the production of unsafe, low-quality output. To effectively address these issues, it is crucial to align these models with desired behaviors based on feedback from a multimodal judge. Despite their significance, current multimodal judges are often inadequately evaluated with respect to their capabilities and limitations, potentially leading to misalignment and unsafe fine-tuning outcomes.

To address this issue, we introduce MJ-Bench, a novel benchmark that incorporates a comprehensive preference dataset to evaluate multimodal judges in providing feedback for image generation models across four key perspectives: alignment, safety, image quality, and bias. Specifically, we evaluate a large variety of multimodal judges, including smaller-sized CLIP-based scoring models, open-source VLMs (e.g., the LLaVA family), and closed-source VLMs (e.g., GPT-4o, Claude 3), on each decomposed subcategory of our preference dataset.

Experiments reveal that closed-source VLMs generally provide better feedback, with GPT-4o outperforming the other judges on average. Compared with open-source VLMs, smaller-sized scoring models can provide better feedback regarding text-image alignment and image quality, while VLMs provide more accurate feedback regarding safety and generation bias due to their stronger reasoning capabilities. Further studies of feedback scale reveal that VLM judges generally provide more accurate and stable feedback in natural language (Likert scale) than on numerical scales. Notably, human evaluations of models fine-tuned end-to-end with feedback from each of these multimodal judges reach similar conclusions, further confirming the effectiveness of MJ-Bench.

Framework.

Overview of the proposed MJ-Bench dataset. To comprehensively evaluate the judge feedback provided by multimodal reward models for image generation, our preference dataset is structured around four key dimensions: text-image alignment, safety, image quality and artifacts, and bias and fairness. Each dimension is thoroughly represented through various sub-scenarios that include distinct comparison pairs. These pairs are carefully chosen to highlight subtle yet verifiable reasons, such as incorrect facts, compromised quality, or unsafe implications, that justify the preference.
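
To make the structure of a comparison pair concrete, the sketch below shows one way such an item could be represented in Python; the field names and example values are illustrative assumptions, not the exact schema of the released dataset.

from dataclasses import dataclass

@dataclass
class PreferencePair:
    """Illustrative schema for one MJ-Bench comparison pair (field names are assumptions)."""
    perspective: str      # e.g. "alignment", "safety", "quality", or "bias"
    sub_scenario: str     # e.g. "count" under alignment, "toxicity/crime" under safety
    prompt: str           # the text-to-image prompt shared by both images
    image_chosen: str     # path or URL of the preferred image
    image_rejected: str   # path or URL of the dispreferred image
    reason: str           # the verifiable reason that justifies the preference

# Hypothetical example item.
pair = PreferencePair(
    perspective="alignment",
    sub_scenario="count",
    prompt="three red apples on a wooden table",
    image_chosen="images/alignment/count/0001_chosen.png",
    image_rejected="images/alignment/count/0001_rejected.png",
    reason="The rejected image shows two apples instead of three.",
)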

Leaderboard

We compare over 22 multimodal judges on the MJ-Bench dataset. The leaderboard is updated daily to reflect the latest performance of each model and reports the average accuracy (%) with and without ties for alignment, safety, and artifact. Preference bias is evaluated with three metrics: accuracy (ACC), normalized dispersion score (NDS), and Gini-based equality score (GES).


Evaluation of three types of multimodal judges across four perspectives on the MJ-Bench dataset. The average accuracy (%) with and without ties is reported for alignment, safety, and artifact. Preference bias is evaluated with three metrics: accuracy (ACC), normalized dispersion score (NDS), and Gini-based equality score (GES).
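
As a rough illustration of how the with- and without-ties accuracy can be computed from pairwise judge scores, here is a minimal Python sketch; the pairs structure and the half-credit tie handling are our assumptions, not the official evaluation script.

# Minimal sketch of pairwise preference accuracy, with and without ties.
# `pairs` is a hypothetical list of (score_chosen, score_rejected) tuples
# produced by a multimodal judge; the tie handling below is an assumption.
def pairwise_accuracy(pairs, count_ties=True):
    correct, total = 0.0, 0
    for score_chosen, score_rejected in pairs:
        if score_chosen == score_rejected:        # the judge is indifferent
            if count_ties:
                correct += 0.5                    # a tie counts as half credit
                total += 1
            continue                              # without ties, the pair is skipped
        correct += score_chosen > score_rejected  # 1 if the preferred image wins
        total += 1
    return correct / total if total else 0.0

pairs = [(8.0, 3.0), (5.0, 5.0), (7.0, 2.0)]
print(pairwise_accuracy(pairs, count_ties=True))   # ~0.833 (tie counted)
print(pairwise_accuracy(pairs, count_ties=False))  # 1.0 (tie skipped)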

Human evaluation results on the images generated by six fine-tuned SD-v1.5 models, each trained with feedback from a different multimodal judge, i.e., GPT-4o, GPT-4-vision, Gemini Ultra, Claude 3 Opus, InternVL-Chat-V1-5, and HPS-v2.1. Specifically, we consider the following four metrics: ranking over a fixed seed (FR), ranking over random seeds (RR), average ranking (AR), and average voting (AV).

The detailed evaluation results of all multimodal judges on the alignment perspective. Feedback is provided on a numerical scale in the range [0, 10]. Specifically, we study their individual performance over five alignment objectives: object (existence), attribute, action, location, and count.

The detailed evaluation results of all multimodal judges on the safety perspective. Feedback is provided on a numerical scale in the range [0, 10]. Specifically, we study their individual performance over two safety objectives: toxicity (crime, shocking, and disgust) and NSFW (evident, evasive, and subtle).

The detailed evaluation results of all multimodal judges on the quality perspective. Feedback is provided on a numerical scale in the range [0, 10]. Specifically, we study their individual performance over two quality objectives: distortion (including human face, human limb, and object) and blurry (including defocused and motion).

The detailed evaluation results of all multimodal judges on the bias perspective. Feedback is provided on different scales, including numerical scales ([0, 5] and [0, 10]) and a Likert scale ([Extremely Poor, Poor, Average, Good, Outstanding]). We report the average ACC, NDS, and GES scores for each model across all occupations/educations.
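
For concreteness, a judge's Likert-scale answer can be mapped to an ordinal value before the two images are compared; the mapping and helper below are a hypothetical sketch, not the exact prompt or parser used in MJ-Bench.

# Hypothetical mapping from the Likert labels listed above to ordinal values,
# used to turn two judge answers into a pairwise preference.
LIKERT_SCALE = {
    "Extremely Poor": 1,
    "Poor": 2,
    "Average": 3,
    "Good": 4,
    "Outstanding": 5,
}

def likert_preference(answer_a: str, answer_b: str) -> str:
    """Return which image the judge prefers, or 'tie' if the labels match."""
    score_a = LIKERT_SCALE[answer_a.strip()]
    score_b = LIKERT_SCALE[answer_b.strip()]
    if score_a == score_b:
        return "tie"
    return "image_a" if score_a > score_b else "image_b"

print(likert_preference("Good", "Poor"))        # image_a
print(likert_preference("Average", "Average"))  # tie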


To add your model to the leaderboard, please refer to our Hugging Face leaderboard page.

Data

You can download our data directly from Hugging Face Datasets. For guidance on how to access and use the data, please consult the instructions on GitHub.
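
If you use the Hugging Face datasets library, loading the benchmark might look like the sketch below; the repository id and the need for a configuration name are assumptions, so please check the dataset card for the exact identifiers.

# Hypothetical loading sketch; verify the repository id, configuration names,
# and splits on the Hugging Face dataset card before running.
from datasets import load_dataset

dataset = load_dataset("MJ-Bench/MJ-Bench")  # a config name may be required, e.g. one per perspective
print(dataset)                               # inspect the available splits and their columns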

BibTeX

@misc{chen2024mjbenchmultimodalrewardmodel,
      title={MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation?}, 
      author={Zhaorun Chen and Yichao Du and Zichen Wen and Yiyang Zhou and Chenhang Cui and Zhenzhen Weng and Haoqin Tu and Chaoqi Wang and Zhengwei Tong and Qinglan Huang and Canyu Chen and Qinghao Ye and Zhihong Zhu and Yuqing Zhang and Jiawei Zhou and Zhuokai Zhao and Rafael Rafailov and Chelsea Finn and Huaxiu Yao},
      year={2024},
      eprint={2407.04842},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2407.04842},
    }

Contact Us

If you have any inquiries about MJ-Bench, feel free to reach out to us at mjbenchofficial@gmail.com or raise an issue on GitHub.