Publication Details
Abstract
Drawing on recent educational research and policy guidance, this article examines a rubric-grounded large language model (LLM) feedback system for K–12 education in two contrasting contexts: the United States and Uzbekistan. The study highlights how AI-powered tutors can provide timely, personalized formative feedback to students and augment teacher capacity, while also addressing challenges of reliability (avoiding “hallucinated” content), fairness and bias, privacy, and deployment in resource-constrained settings. We outline a system design and evaluation framework that integrates learning outcome measures with business intelligence analytics for continuous improvement. A comparative analysis of a U.S. pilot and considerations for an Uzbek deployment indicates that LLM-driven feedback can improve student writing outcomes and teacher efficiency in both high-resource and developing environments, though with necessary localization and training. The findings underscore the need for robust alignment with curriculum standards, human-in-the-loop oversight, and supportive policy frameworks to ensure reliable, equitable scaling of AI in education.