EssayJudge

A Multi-Granular Benchmark for Assessing Automated Essay Scoring Capabilities of Multimodal Large Language Models

Jiamin Su 1,2,*, Yibo Yan 1,3,*, Fangteng Fu 1, Han Zhang 1, Jingheng Ye 4, Xiang Liu 1,3, Jiahao Huo 1, Huiyu Zhou 5, Xuming Hu 1,2,3,†
1 The Hong Kong University of Science and Technology (Guangzhou), 2 Beijing Future Brain Education Technology Co., Ltd.,
3 The Hong Kong University of Science and Technology, 4 Tsinghua University, 5 Guangxi Zhuang Autonomous Region Big Data Research Institute

Abstract

Automated Essay Scoring (AES) plays a crucial role in educational assessment by providing scalable and consistent evaluations of writing tasks. However, traditional AES systems face three major challenges: ❶ reliance on handcrafted features that limit generalizability, ❷ difficulty in capturing fine-grained traits like coherence and argumentation, and ❸ inability to handle multimodal contexts. In the era of Multimodal Large Language Models (MLLMs), we propose ESSAYJUDGE, the first multimodal benchmark to evaluate AES capabilities across lexical-, sentence-, and discourse-level traits. By leveraging MLLMs’ strengths in trait-specific scoring and multimodal context understanding, ESSAYJUDGE aims to offer precise, context-rich evaluations without manual feature engineering, addressing longstanding AES limitations. Our experiments with 18 representative MLLMs reveal gaps in AES performance compared to human evaluation, particularly in discourse-level traits, highlighting the need for further advancements in MLLM-based AES research. The dataset and code are available at this URL.

Contributions

Our contributions can be summarized as follows:

  • We introduce the first multimodal AES dataset ESSAYJUDGE, comprising over 1,000 high-quality multimodal English essays, each of which has undergone rigorous multi-round human annotation and verification.
  • We propose a trait-specific scoring framework that enables comprehensive evaluation with ten multi-granular metrics, covering three dimensions: lexical, sentence, and discourse levels.
  • We conduct an in-depth evaluation of 18 state-of-the-art MLLMs and human evaluation on ESSAYJUDGE, using Quadratic Weighted Kappa (QWK) as the primary metric to assess trait-specific scoring performance (see the QWK sketch below).
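Since QWK is the primary reported metric, the following is a minimal, self-contained sketch of how it can be computed between human and model scores for a single trait. It uses the standard quadratically weighted formulation with NumPy; the function name, rating range, and example scores are illustrative and not taken from the benchmark.

    import numpy as np

    def quadratic_weighted_kappa(y_true, y_pred, min_rating, max_rating):
        """Quadratic Weighted Kappa between two lists of integer ratings.

        QWK = 1 - sum(W * O) / sum(W * E), where O is the observed rating
        confusion matrix, E is the expected matrix under rater independence
        (outer product of the marginal histograms, scaled to the same total),
        and W holds quadratic disagreement weights (i - j)^2 / (n - 1)^2.
        """
        y_true = np.asarray(y_true, dtype=int)
        y_pred = np.asarray(y_pred, dtype=int)
        n = max_rating - min_rating + 1

        # Observed confusion matrix O
        O = np.zeros((n, n), dtype=float)
        for t, p in zip(y_true, y_pred):
            O[t - min_rating, p - min_rating] += 1

        # Expected matrix E from marginal histograms, scaled to the total of O
        E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()

        # Quadratic weight matrix W
        i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        W = (i - j) ** 2 / (n - 1) ** 2

        return 1.0 - (W * O).sum() / (W * E).sum()

    # Illustrative example: agreement between human and model scores on a 0-5 scale
    human = [4, 3, 5, 2, 4, 3]
    model = [4, 2, 5, 3, 4, 4]
    print(round(quadratic_weighted_kappa(human, model, 0, 5), 3))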

Comparison of task settings between the previous evaluation paradigm (a) and our proposed ESSAYJUDGE benchmark (b) on the automated essay scoring task.
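To make the task setting in (b) concrete, below is an illustrative sketch of how an MLLM could be queried for trait-specific scores on a chart-grounded essay. It assumes the OpenAI Python SDK with GPT-4o; the score_essay helper, the 0-5 scale, and the prompt wording are hypothetical simplifications, and the actual rubrics and all ten traits are defined in the paper and repository.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # A few of the trait names mentioned on this page; the benchmark defines ten in total.
    TRAITS = ["Coherence", "Argument Clarity", "Essay Length", "Justifying Persuasiveness"]

    def score_essay(essay_text: str, chart_url: str) -> str:
        """Query an MLLM for trait-specific scores on an essay grounded in a chart image."""
        prompt = (
            "You are an essay rater. Score the essay below on each of these traits "
            f"({', '.join(TRAITS)}) from 0 to 5, given the chart it describes. "
            "Return one line per trait in the form 'Trait: score'.\n\nEssay:\n" + essay_text
        )
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": chart_url}},
                ],
            }],
            temperature=0,
        )
        return response.choices[0].message.content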


EssayJudge Dataset

Overview


Overview of EssayJudge.

Dataset statistics


Statistics of the EssayJudge dataset.

Roadmap of dataset collection, annotation, and continuous updates


Experiment Results

Performance comparison of MLLMs across traits


Experiment Analysis

Closed-source MLLMs significantly outperform open-source MLLMs on essay scoring, with GPT-4o achieving the strongest overall performance.


Closed-source MLLMs exhibit distinct scoring patterns compared to open-source counterparts, characterized by greater score variability and stricter adherence to rubrics.


Multimodal inputs play a critical role in improving the performance of LLMs in AES tasks.


Closed-source MLLMs perform poorly in evaluating Argument Clarity and Essay Length while excelling at lexical-level assessment.


MLLMs demonstrate outstanding performance in assessing Coherence when grading essays related to line charts.


Most closed-source MLLMs perform better when evaluating essays in the single-image setting.


For Justifying Persuasiveness, most MLLMs perform better when evaluating essays in the multi-image setting.


BibTeX


@misc{su2025essayjudgemultigranularbenchmarkassessing,
  title={EssayJudge: A Multi-Granular Benchmark for Assessing Automated Essay Scoring Capabilities of Multimodal Large Language Models},
  author={Jiamin Su and Yibo Yan and Fangteng Fu and Han Zhang and Jingheng Ye and Xiang Liu and Jiahao Huo and Huiyu Zhou and Xuming Hu},
  year={2025},
  eprint={2502.11916},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2502.11916}
}