Diffstat (limited to 'aied2017')
-rw-r--r-- | aied2017/aied2017.tex | 15
1 file changed, 11 insertions, 4 deletions
diff --git a/aied2017/aied2017.tex b/aied2017/aied2017.tex
index 5309c76..41bcbb7 100644
--- a/aied2017/aied2017.tex
+++ b/aied2017/aied2017.tex
@@ -13,15 +13,22 @@
 
 \begin{document}
 
-\title{Patterns for debugging student programs}
-\author{TODO}
+\title{Automatic extraction of AST patterns \\ for debugging student programs}
+\author{Timotej Lazar, Martin Možina, Ivan Bratko}
 \institute{University of Ljubljana, Faculty of Computer and Information Science, Slovenia}
 \maketitle
 
 \begin{abstract}
-We propose new program features to support mining data from student submissions in a programming tutor. We extract syntax-tree patterns from student programs, and use them as features to induce rules for predicting program correctness. Discovered rules allow us to correctly classify a large majority of submissions based only on their structural features. Rules can be used to recognize intent, and provide hints in a programming tutor by pointing out incorrect or missing patterns. Evaluating out approach on past student data, we were able to find errors in over 80\% of incorrect submissions.
+% motivation
+When implementing a programming tutor, it is often difficult to consider all possible errors encountered by students. A possible alternative is to automatically learn a bug library of erroneous patterns from students’ programs.
+% learning
+We propose using abstract-syntax-tree patterns as features for learning rules to distinguish between correct and incorrect programs. These rules can be used for debugging student programs: rules for incorrect programs (buggy rules) contain patterns indicating mistakes, whereas each rule for correct programs covers a subset of submissions sharing the same solution strategy.
+% generating hints
+To generate hints, we first check all buggy rules and point out incorrect patterns. If no buggy rule matches, rules for correct programs are used to recognize the student’s intent and suggest patterns that still need to be implemented.
+% evaluation
+We evaluated our approach on past student programming data for a number of Prolog problems. For many problems, the induced rules correctly classified over 90\% of programs based only on their structural features. For approximately 75\% of incorrect submissions we were able to generate hints that were implemented by the student in some subsequent submission.
 \\\\
-\textbf{Keywords:} Intelligent tutoring systems · Programming · Hint generation
+\textbf{Keywords:} Programming tutors · Error diagnosis · Hint generation · Abstract syntax tree · Syntactic features
 \end{abstract}
 
 \input{introduction}
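The hint-generation procedure described in the new abstract (check buggy rules first, then fall back to rules for correct programs to suggest missing patterns) can be sketched in a few lines. The sketch below is a minimal illustration under stated assumptions, not the paper's implementation: all names and toy patterns are hypothetical, AST patterns are modelled as plain strings, and each learned rule as a set of patterns.

# Minimal sketch (hypothetical names): hint generation from learned rules.
# A rule is modelled as a frozenset of AST patterns; a program is the set
# of patterns extracted from its syntax tree.

def generate_hints(program_patterns, buggy_rules, correct_rules):
    # 1. Buggy rules: if all patterns of some buggy rule occur in the
    #    program, point those patterns out as errors.
    for rule in buggy_rules:
        if rule <= program_patterns:
            return [('remove', pattern) for pattern in sorted(rule)]

    # 2. Intent recognition: find the correct-program rule sharing the
    #    most patterns with the submission, and suggest the missing ones.
    best = max(correct_rules, key=lambda rule: len(rule & program_patterns))
    return [('add', pattern) for pattern in sorted(best - program_patterns)]

# Example with toy pattern names (hypothetical):
program = {'head(sister/2)', 'goal(parent/2)'}
buggy = [frozenset({'goal(parent/2)', 'unify(X, X)'})]
correct = [frozenset({'head(sister/2)', 'goal(parent/2)', 'goal(female/1)'})]
print(generate_hints(program, buggy, correct))
# -> [('add', 'goal(female/1)')]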