Diffstat (limited to 'aied2017/aied2017.tex')
-rw-r--r--  aied2017/aied2017.tex  6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/aied2017/aied2017.tex b/aied2017/aied2017.tex
index c8a6e9c..a253632 100644
--- a/aied2017/aied2017.tex
+++ b/aied2017/aied2017.tex
@@ -38,13 +38,13 @@
\begin{abstract}
% motivation
-When implementing a programming tutor, it is often difficult to consider all possible errors encountered by students. A possible alternative is to automatically learn a bug library of erroneous patterns from students’ programs.
+When implementing a programming tutor, it is often difficult to manually consider all possible errors encountered by students. An alternative is to automatically learn a bug library of erroneous patterns from students’ programs.
% learning
-We propose using abstract-syntax-tree patterns as features for learning rules to distinguish between correct and incorrect programs. These rules can be used for debugging student programs: rules for incorrect programs (buggy rules) contain patterns indicating mistakes, whereas each rule for correct programs covers a subset of submissions sharing the same solution strategy.
+We propose using abstract-syntax-tree (AST) patterns as features for learning rules to distinguish between correct and incorrect programs. These rules can be used for debugging student programs: rules for incorrect programs (buggy rules) contain patterns indicating mistakes, whereas each rule for correct programs covers a subset of submissions sharing the same solution strategy.
% generating hints
To generate hints, we first check all buggy rules and point out incorrect patterns. If no buggy rule matches, rules for correct programs are used to recognize the student’s intent and suggest patterns that still need to be implemented.
% evaluation
-We evaluated our approach on past student programming data for a number of Prolog problems. For many problems, the induced rules correctly classified over 90\% of programs based only on their structural features. For approximately 75\% of incorrect submissions we were able to generate hints that were implemented by the student in some subsequent submission.
+We evaluated our approach on past student programming data for a number of Prolog problems. For 31 out of 44 problems, the induced rules correctly classified over 85\% of programs based only on their structural features. For approximately 73\% of incorrect submissions, we were able to generate hints that were implemented by the student in some subsequent submission.
\\\\
\textbf{Keywords:} Programming tutors · Error diagnosis · Hint generation · Abstract syntax tree · Syntactic features
\end{abstract}
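
The hunk above describes the hint-generation procedure only in prose. As a minimal illustrative sketch of that two-step loop, assuming rules are represented as sets of AST-pattern identifiers (the function name and data structures below are assumptions for illustration, not the implementation from the paper):

# Sketch of the hint-generation loop from the abstract: check buggy rules
# first; if none match, use correct-program rules to infer intent and
# suggest the patterns still missing.

def generate_hints(program_patterns, buggy_rules, correct_rules):
    """Return hints for a submission given its set of AST patterns.

    program_patterns: set of AST-pattern identifiers found in the program.
    buggy_rules: list of pattern sets, each indicating a known mistake.
    correct_rules: list of pattern sets, each characterizing one
        solution strategy for the problem.
    """
    # 1. Check all buggy rules: any rule whose patterns all occur in the
    #    program points out the incorrect patterns it contains.
    hints = [("buggy", rule & program_patterns)
             for rule in buggy_rules
             if rule <= program_patterns]
    if hints:
        return hints

    # 2. Otherwise, take the correct rule sharing the most patterns with
    #    the submission (the presumed intent) and suggest what is missing.
    best = max(correct_rules, key=lambda rule: len(rule & program_patterns))
    return [("intent", best - program_patterns)]

# Example: a submission matching no buggy rule receives the missing
# patterns of its closest solution strategy.
hints = generate_hints({"p1", "p2"},
                       buggy_rules=[{"p9"}],
                       correct_rules=[{"p1", "p2", "p3"}, {"p4", "p5"}])
# -> [("intent", {"p3"})]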