Automated Essay Scoring (AES) plays a crucial role in educational assessment by providing scalable and consistent evaluations of writing tasks. However, traditional AES systems face three major challenges: ❶ reliance on handcrafted features that limit generalizability, ❷ difficulty in capturing fine-grained traits like coherence and argumentation, and ❸ inability to handle multimodal contexts. In the era of Multimodal Large Language Models (MLLMs), we propose EssayJudge, the first multimodal benchmark to evaluate AES capabilities across lexical-, sentence-, and discourse-level traits. By leveraging MLLMs' strengths in trait-specific scoring and multimodal context understanding, EssayJudge aims to offer precise, context-rich evaluations without manual feature engineering, addressing longstanding AES limitations. Our experiments with 18 representative MLLMs reveal gaps in AES performance compared to human evaluation, particularly in discourse-level traits, highlighting the need for further advancements in MLLM-based AES research. The dataset and code are available at this URL.
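To make the trait-specific, multimodal scoring setup concrete, below is a minimal sketch of how one might prompt a vision-capable model to rate a single trait of an essay written about a chart. This is an illustrative example only, not EssayJudge's released evaluation harness: the OpenAI client, the gpt-4o model string, the score_trait helper, the rubric wording, and the 0–10 score range are all assumptions made for the sketch.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def encode_image(path: str) -> str:
    """Read a local image and return it as a base64 string for the API payload."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


def score_trait(essay: str, trait: str, rubric: str, image_path: str) -> str:
    """Ask a vision-capable chat model for a single trait score on one essay.

    `trait` and `rubric` are plain-text descriptions supplied by the caller;
    the 0-10 integer scale below is an assumption for this sketch.
    """
    image_b64 = encode_image(image_path)
    prompt = (
        f"You are an experienced essay rater. Score the essay below on the trait "
        f"'{trait}' according to this rubric:\n{rubric}\n"
        f"Reply with a single integer from 0 to 10 and nothing else.\n\n"
        f"Essay:\n{essay}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable chat model would do here
        temperature=0,   # deterministic scoring for reproducibility
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return (response.choices[0].message.content or "").strip()
```

Looping such a call over each essay, its prompt image(s), and each trait rubric, with temperature 0 for reproducibility, mirrors the kind of rubric-conditioned, per-trait scoring described above.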
Our contributions can be summarized as follows:
❶ We introduce EssayJudge, the first multimodal benchmark for assessing the AES capabilities of MLLMs across lexical-, sentence-, and discourse-level traits.
❷ We evaluate 18 representative MLLMs on trait-specific essay scoring with multimodal context.
❸ Our results reveal gaps between MLLM-based scoring and human evaluation, particularly on discourse-level traits, highlighting directions for future MLLM-based AES research.
Figure: Comparison of task settings between the previous evaluation paradigm (a) and our proposed EssayJudge benchmark (b) on the automated essay scoring task.
Figure: Overview of EssayJudge.
Figure: Statistics of the EssayJudge dataset.
Table: Performance comparison of MLLMs across traits.
Closed-source MLLMs demonstrate significant superiority over open-source MLLMs in essay scoring tasks, with GPT-4o achieving the strongest overall performance.
Closed-source MLLMs exhibit distinct scoring patterns compared to open-source counterparts, characterized by greater score variability and stricter adherence to rubrics.
Multimodal inputs play a critical role in improving the performance of LLMs in AES tasks.
Closed-source MLLMs perform poorly in evaluating Argument Clarity and Essay Length while excelling at lexical-level assessment.
MLLMs demonstrate outstanding performance in assessing Coherence when grading essays related to line charts.
Most closed-source MLLMs perform better when evaluating essays in the single-image setting.
For Justifying Persuasiveness, most MLLMs perform better when evaluating essays in the multi-image setting.
@misc{su2025essayjudgemultigranularbenchmarkassessing,
      title={EssayJudge: A Multi-Granular Benchmark for Assessing Automated Essay Scoring Capabilities of Multimodal Large Language Models},
      author={Jiamin Su and Yibo Yan and Fangteng Fu and Han Zhang and Jingheng Ye and Xiang Liu and Jiahao Huo and Huiyu Zhou and Xuming Hu},
      year={2025},
      eprint={2502.11916},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.11916},
}