About LearnLoop

Helping educators design rubrics, give clear feedback, and ensure fair grading.

Michel Haring (left) and Luc Mahieu (right) after a successful LearnLoop pilot

LearnLoop began with a simple observation: instructors devote considerable time to assessment. This observation aligns with findings from De Jonge Akademie, which documented the growing workload in higher education. In response, the Dutch Ministry of Education launched the "Een Slimmer Collegejaar" ("A Smarter Academic Year") initiative, within which LearnLoop originated at the University of Amsterdam.

Empirical motivation

The idea of using AI for assessment emerged from conversations with educators. Across these conversations, instructors consistently indicated a preference for tools that save time rather than add tasks, and the majority reported that assessment was the most time-consuming part of their work.

Research focus

To inform the software's design, ongoing research examines how AI can support assessment. The study consists of a quantitative phase evaluating grading consistency and accuracy, and an ongoing qualitative phase investigating accuracy within human-in-the-loop workflows, potential time savings, and user experience.

Collaborations

Research activities are conducted in collaboration with educational institutions across Europe and the United States. This international scope allows comparison across disciplines and contexts, strengthening the validity and generalizability of findings.

Human-in-the-loop

The position adopted by LearnLoop is that AI should not replace instructors' expertise. Instead, AI should support expert judgment, improving grading consistency while reducing workload. To achieve this, the workflows of 32 instructors were examined and mapped, informing the current system design.