A comparative review of CET4 and CEFR writing assessment with insights from task complexity and SLA theories

Bibliographic Details
Main Authors: Li, Changlin; Nik Aloesnita, Nik Mohd Alwi; Mohammad Musab, Azmat Ali
Format: Article
Language: English
Published: www.msocialsciences.com 2025
Subjects:
Online Access: https://umpir.ump.edu.my/id/eprint/43995/1/A%20comparative%20review%20of%20CET4%20and%20CEFR%20writing%20assessment%20with%20insights%20from%20task%20complexity%20and%20SLA%20theories.pdf
https://umpir.ump.edu.my/id/eprint/43995/
https://doi.org/10.37134/ajelp.vol13.1.7.2025
Description
Summary: The CET4 writing assessment is pivotal in evaluating language proficiency and informing language teaching practices in China's higher education system. This study reviews the relationship between the CET4 writing rubrics and the Common European Framework of Reference for Languages (CEFR) level descriptors in essay writing, focusing chiefly on studies from the past ten years. Despite the widespread use of CET4 in Chinese universities, its comparison with the CEFR writing level descriptors requires further investigation. This review explores three primary areas: task complexity, automated scoring systems, and second language acquisition (SLA) frameworks in essay writing. Research indicates that the majority of CET4 writing scores are roughly equivalent to CEFR levels B1-B2, though comparisons with higher proficiency levels such as C1-C2 remain inconsistent. Automated scoring systems are reliable for evaluating basic writing features but struggle to assess aspects such as discourse description and argument quality, which are essential to the CEFR level descriptors. Task complexity analysis shows that more advanced writing tasks correlate with higher CEFR levels, suggesting ways to refine the assessment. However, current automated scoring systems cannot capture the nuanced features of advanced writing. These findings highlight the need for manual evaluation by human raters, especially for higher-level writing features, while pointing to opportunities for improving grading methods and task complexity design to enhance essay writing instruction.