Article Type: Research Article

Authors

1 Ph.D. in English Language Teaching, University of Technology and Applied Sciences, Shinas, Oman

2 M.A. in English Language Teaching, University of Technology and Applied Sciences, Al Musanna, Oman

10.22049/jalda.2026.30804.1842

Abstract

This study examines the perceptions and practices of Level 2 EFL teachers in using standardized writing assessment rubrics at the University of Technology and Applied Sciences (UTAS) in Oman. Semi-structured interviews and a think-aloud protocol were used to investigate how teachers interpret and apply the rubric criteria when scoring the Task 2 writing section. The qualitative data were analyzed thematically, revealing strengths and limitations of the current rubric, such as vague descriptors and a mismatch with expectations for pre-intermediate learners. Based on the teachers' feedback, a revised rubric was developed. To evaluate its impact, a within-group design was employed, and a paired-samples t-test was conducted to compare students' writing scores under the two rubric versions. The results showed a significant improvement in students' overall writing scores, particularly in grammar and vocabulary. The findings underscore the value of teacher-informed rubric design, alignment with instructional objectives, and ongoing adjustment. The study contributes to enhancing assessment reliability and pedagogical relevance in EFL writing assessment at higher education institutions such as UTAS in Oman.
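The within-group comparison described above can be illustrated with a short Python sketch. The scores below are invented for illustration only (they are not the study's data), and the analysis mirrors the design in the abstract: each student's essay is scored under the original and the revised rubric, and the per-student differences feed a paired-samples t-test and a paired-samples Cohen's d.

```python
# Hypothetical sketch of a within-group (paired) comparison of essay scores
# under two rubric versions. Scores are invented, not the study's data.
# In practice, scipy.stats.ttest_rel would also supply the p-value.
from math import sqrt
from statistics import mean, stdev

scores_original = [11.0, 12.5, 10.0, 13.0, 11.5, 12.0, 10.5, 13.5]
scores_revised  = [12.5, 13.0, 11.5, 14.0, 12.0, 13.5, 11.0, 14.5]

# Paired design: analyze each student's score difference, not the raw groups.
diffs = [r - o for r, o in zip(scores_revised, scores_original)]
n = len(diffs)

# t statistic: mean difference divided by its standard error.
t_stat = mean(diffs) / (stdev(diffs) / sqrt(n))

# Cohen's d for paired samples: mean difference / SD of the differences.
cohens_d = mean(diffs) / stdev(diffs)

print(f"t({n - 1}) = {t_stat:.2f}, d = {cohens_d:.2f}")
```

With these illustrative numbers the test yields t(7) ≈ 6.11 and d ≈ 2.16; effect-size benchmarks for interpreting such values in L2 research are discussed in Plonsky and Oswald (2014).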

Keywords

Subjects

References

Abderrahmane, D., & Mebitil, N. (2025). AI and the pedagogical shift: Adaptability strategies for Algerian EFL teachers. Logos Universality Mentality Education Novelty Social Sciences, 14(1), 1–17. https://doi.org/10.18662/lumenss/14.1/112
Alghizzi, T. M., & Alshahrani, T. M. (2024). Effects of grading rubrics on EFL learners' writing in an EMI setting. Heliyon, 10(18), e36394. https://doi.org/10.1016/j.heliyon.2024.e36394
Alshakhi, A. (2019). Revisiting the writing assessment process at a Saudi English language institute: Problems and solutions. English Language Teaching, 12(1), 176–185. https://doi.org/10.5539/elt.v12n1p176
Alderson, J. C. (2005). Diagnosing foreign language proficiency: The interface between learning and assessment. Continuum.
Al-Saadi, Z., Khalil, H., & Yousef, A. M. F. (2025). Exploring Omani EFL student teachers’ perceptions on fostering critical thinking through ethical use of AI. Education Process: International Journal, 17(1), 12–25. https://doi.org/10.22521/edupij.2025.17.319
Andrade, H. L. (2005). Teaching with rubrics: The good, the bad, and the ugly. College Teaching, 53(1), 27–31. https://doi.org/10.3200/CTCH.53.1.27-31
Andrade, H., & Du, Y. (2005). Student perspectives on rubric-referenced assessment. Practical Assessment, Research, and Evaluation, 10(3), 1–11.
Barkaoui, K. (2007). Rating scale impact on EFL essay marking: A mixed-method study. Assessing Writing, 12(2), 86–107.
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
Brookhart, S. M. (2018). How to create and use rubrics for formative assessment and grading. ASCD.
Brookhart, S. M., & Chen, F. (2015). The quality and effectiveness of descriptive rubrics. Educational Review, 67(3), 343–368. https://doi.org/10.1080/00131911.2014.929565
Brown, G. T., & Harris, L. R. (2014). The future of self-assessment in classroom practice: Reframing self-assessment as a core competency. Frontline Learning Research, 2(1), 22–30. https://doi.org/10.14786/flr.v2i1.24
Brown, J. D., & Hudson, T. (2002). Criterion-referenced language testing. Cambridge University Press.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Routledge. https://doi.org/10.4324/9780203771587
Crusan, D., Plakans, L., & Gebril, A. (2016). Writing assessment literacy: Surveying second language teachers’ knowledge, beliefs, and practices. Assessing Writing, 28, 43–56.
Davison, C., & Leung, C. (2009). Current issues in English language teacher‐based assessment. TESOL Quarterly, 43(3), 393–415. https://doi.org/10.1002/j.1545-7249.2009.tb00242.x
Eckes, T. (2012). Operational rater types in writing assessment: Linking rater cognition to rater behavior. Language Assessment Quarterly, 9(3), 270–292. https://doi.org/10.1080/15434303.2011.649381
Ghanbari, N., & Barati, H. (2020). Development and validation of a rating scale for Iranian academic writing assessment: A mixed-methods study. Language Testing in Asia, 10(1), 1–22. https://doi.org/10.1186/s40468-020-00112-3
Goodwin, R. (2019). Opportunities and questions: A short report on rubric assessments in Asia and the Middle East. Arab World English Journal (AWEJ), 10(3). https://dx.doi.org/10.24093/awej/vol10no3.2
Fulcher, G., & Davidson, F. (2007). Language testing and assessment: An advanced resource book. Routledge.
Hamp-Lyons, L. (2007). The impact of testing practices on teaching: Ideologies and alternatives. In J. Cummins & C. Davison (Eds.), International Handbook of English Language Teaching (pp. 487–504). Springer.
Hubias, A., & Muftahu, M. (2022). Internationalization of curriculum in Omani higher education: Perceptions of academic staff in UTAS. International Journal of Higher Education, 11(5), 134–144. https://doi.org/10.5430/ijhe.v11n5p134
Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2), 130–144. https://doi.org/10.1016/j.edurev.2007.05.002
Kane, M. T. (2000). Validity as an argument in educational assessment. Measurement, 2(2-3), 31–34.
Knoch, U., & Chapelle, C. A. (2017). Validation of rating processes within an argument-based framework. Language Testing, 35(4). https://doi.org/10.1177/0265532217710049
Larson-Hall, J., & Plonsky, L. (2015). Reporting and interpreting quantitative research findings: What gets reported and recommendations for the field. Language Learning, 65(S1), 127–159.
Le, X. M., Phuong, H. Y., Phan, T., & Thao, L. T. (2024). Impact of using analytic rubrics for peer assessment on EFL students’ writing performance: An experimental study. Multicultural Education, 10(3), 41–53. https://doi.org/10.5281/zenodo.7750831
Lee, I. (2009). Ten mismatches between teachers’ beliefs and written feedback practice. ELT Journal, 63(1), 13–22. https://doi.org/10.1093/elt/ccn010
Jin, H. (2025). When AI meets source use: Exploring ChatGPT’s potential in L2 summary writing assessment. System, 133, 103126. https://doi.org/10.1016/j.system.2025.103126
Li, J., & Lindsey, P. (2015). Understanding variations between student and teacher application of rubrics. Assessing Writing, 26, 67–79. https://doi.org/10.1016/j.asw.2015.07.003
Lim, J., & Sudweeks, R. (2020). Rubric revision and its impact on rater reliability and student performance. Assessing Writing, 44, 100450.
Mosquera, L. (2017). The impact of analytic rubrics on students' writing. Profile Issues in Teachers' Professional Development, 19(1), 149–159.
Nation, I. S. P. (2001). Learning vocabulary in another language. Cambridge University Press.
Nurhayati, A. (2020). The implementation of formative assessment in EFL writing: A case study at a secondary school in Indonesia. Pedagogy, 8(2), 126. https://doi.org/10.32332/pedagogy.v8i2.2263
Pallant, J. (2020). SPSS survival manual: A step-by-step guide to data analysis using IBM SPSS. Routledge.
Panadero, E., & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purposes revisited: A review. Educational Research Review, 9, 129–144.
Phuong, H. Y., Phan, Q. T., & Thao, L. T. (2023). The effects of using analytical rubrics in peer and self-assessment on EFL students’ writing proficiency: A Vietnamese contextual study. Language Testing in Asia, 13(1). https://doi.org/10.1186/s40468-023-00256-y
Plonsky, L., & Oswald, F. L. (2014). How big is “big”? Interpreting effect sizes in L2 research. Language Learning, 64(4), 878–912. https://doi.org/10.1111/lang.12079
Popham, W. J. (2009). Assessment literacy for teachers: Faddish or fundamental? Theory Into Practice, 48(1), 4–11. https://doi.org/10.1080/00405840802577536
Read, J. (2000). Assessing vocabulary. Cambridge University Press.
Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435–448. https://doi.org/10.1080/02602930902862859
Reynolds-Keefer, L. (2010). Rubric-referenced assessment in teacher preparation: An opportunity to learn by using. Practical Assessment, Research, and Evaluation, 15(1).  https://doi.org/10.7275/psk5-mf68
Riddle, E. J., Smith, M., & Frankforter, S. A. (2016). A rubric for evaluating student analyses of business cases. Journal of Management Education, 40(5), 595–618. https://doi.org/10.1177/1052562916644283
Sabermoghaddam Roudsari, S., Azabdaftari, B., & Seifoori, Z. (2024). The intervention of criteria-referenced self-assessment in developing the accuracy, lexical resource, and coherence of advanced Iranian EFL learners’ writing: Shared vs. independent tasks. Journal of Applied Linguistics and Applied Literature: Dynamics and Advances, 12(2), 31–58. https://doi.org/10.22049/jalda.2024.28336.1523
Sadler, D. R. (2009). Indeterminacy in the use of preset criteria for assessment and grading. Assessment & Evaluation in Higher Education, 34(2), 159–179. https://doi.org/10.1080/02602930801956059
Shohamy, E. (2001). The power of tests: A critical perspective on the uses of language tests. Pearson Education.
Weigle, S. C. (2002). Assessing writing. Cambridge University Press.
Yang, C., & Zhang, L. J. (2023). Think-aloud protocols in second language writing: A mixed-methods study of their reactivity and veridicality. Springer.