An Experimental Evaluation of Tools for Grading Concurrent Programming Exercises

Conference paper: Formal Techniques for Distributed Objects, Components, and Systems (FORTE), 2023

Abstract

Automatic grading based on unit tests is a key feature of massive open online courses (MOOCs) on programming, as it gives students instant feedback and enables courses to scale. This technique works well for sequential programs, where outputs can be checked against a sample of inputs, but it is not adequate for detecting races and deadlocks, which precludes its use for concurrent programming, a key subject in parallel and distributed computing courses. In this paper we provide a hands-on evaluation of verification and testing tools for concurrent programs, collecting a precise set of requirements and describing to what extent each tool can or cannot be used for this purpose. Our conclusion is that automatic grading of concurrent programming exercises remains an open challenge.
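
The abstract's claim is easy to see in a small example. The following Java sketch (a hypothetical exercise, not taken from the paper) shows a shared counter with a data race that an output-checking grader will usually mark as correct, since small workloads only rarely expose the lost updates:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class Counter {
    private int value = 0;              // unsynchronized shared state
    void increment() { value++; }       // read-modify-write: a data race
    int get() { return value; }
}

public class RacyCounterTest {
    public static void main(String[] args) throws Exception {
        Counter c = new Counter();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 1000; i++) {
            pool.submit(c::increment);  // 1000 concurrent increments
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        // Output-based check: frequently prints PASS despite the race,
        // so a grader that only compares outputs cannot reliably flag the bug.
        System.out.println(c.get() == 1000 ? "PASS" : "FAIL");
    }
}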
Files under embargo until Thursday, January 1, 2026.

Dates and versions

hal-04731926, version 1 (11-10-2024)

Cite

Manuel Barros, Maria Ramos, Alexandre Gomes, Alcino Cunha, José Pereira, et al. An Experimental Evaluation of Tools for Grading Concurrent Programming Exercises. 43rd International Conference on Formal Techniques for Distributed Objects, Components, and Systems (FORTE), Jun 2023, Lisbon, Portugal. pp. 3-20, ⟨10.1007/978-3-031-35355-0_1⟩. ⟨hal-04731926⟩