Sun 15 - Sat 21 November 2020 Online Conference
Fri 20 Nov 2020 11:50 - 12:20 at SPLASH-IV - Papers

Automated assessment tools are widely used to provide formative feedback to undergraduate students in computer science courses while helping those courses scale to meet student demand. While formative feedback is a laudable goal, we have observed many students trying to debug their solutions into existence using only the feedback given, losing sight of the learning goals intended by the course staff. In this paper we present case studies of two undergraduate courses indicating that limiting feedback to nudges or hints about where students should focus their efforts in future attempts can improve how they internalize and act on automatically provided feedback. By carefully reasoning about the errors uncovered by our automated assessment approaches, we have been able to create feedback that helps students revisit the learning outcomes of the assignment or course. This approach has been applied to both multiple-choice feedback in online quiz-taking systems and automated assessment of student programming tasks for over 1,000 students in second- and third-year software engineering courses. We have found not only that student performance has not suffered, but also that students reflect positively on how they investigate automated assessment failures.

Displayed time zone: Central Time (US & Canada)