\section{Method}
This section explains the three steps in our approach: discovering AST patterns, learning classification rules for correct and incorrect programs, and using those rules to generate hints.
\subsection{Extracting patterns}
\label{sec:extracting-patterns}
We extract patterns from student submissions. As described above, we are only interested in patterns connecting pairs of leaf nodes in an AST: either two nodes referring to the same variable (like the examples in Fig.~\ref{fig:sister}), or a value (such as the empty list \code{[]} or the number \code{0}) and another variable/value occurring within the same \textsf{compound} or \textsf{binop} (like the blue dotted pattern in Fig.~\ref{fig:sum}).
We induce patterns from such node pairs. Given the clause (the second occurrence of each variable -- \code{A}, \code{B} and \code{C} -- is marked with ’ for disambiguation)
\begin{Verbatim}
a(A, B):-
    b(A', C),
    B' is C' + 1.
\end{Verbatim}
\noindent
we select the following pairs of nodes: \{\code{A},\,\code{A'}\}, \{\code{B},\,\code{B'}\}, \{\code{C},\,\code{C'}\}, \{\code{B'},\,\code{1}\} and \{\code{C'},\,\code{1}\}.
For each selected pair of leaf nodes $(a,b)$ we construct a pattern by walking the AST in depth-first order and recording nodes that lie on the paths to $a$ and $b$. We omit \textsf{and} nodes, as explained in the previous section. We also include certain nodes that lie near the paths to selected leaves. Specifically, we include the functor/operator of all \textsf{compound}, \textsf{binop} and \textsf{unop} nodes containing $a$ or $b$.
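The pair-selection step above can be sketched in Python. This is a simplified, hypothetical encoding (nested tuples for AST nodes; names are illustrative, not the paper's implementation): a depth-first walk collects leaf nodes together with the labels of their ancestors, omitting \textsf{and} nodes, and repeated occurrences of the same variable are paired up.

```python
# Hypothetical sketch of leaf-pair selection over a toy AST encoded as nested
# tuples (label, child, ...) with leaves ("var", name) or ("val", text).

def leaves_with_paths(ast, path=()):
    """Depth-first walk; yield (leaf, tuple-of-ancestor-labels) pairs."""
    label = ast[0]
    if label in ("var", "val"):
        yield ast, path
    else:
        for child in ast[1:]:
            # 'and' nodes are omitted from recorded paths, as in the text
            suffix = path if label == "and" else path + (label,)
            yield from leaves_with_paths(child, suffix)

def variable_pairs(ast):
    """Pair each repeated occurrence of a variable with its previous one."""
    seen = {}
    for leaf, path in leaves_with_paths(ast):
        if leaf[0] != "var":
            continue
        name = leaf[1]
        if name in seen:
            yield (seen[name], (leaf, path))
        seen[name] = (leaf, path)

# toy encoding of:  a(A, B) :- b(A, C), B is C + 1.
clause = ("clause",
          ("head", ("compound", ("functor", "a"),
                    ("args", ("var", "A"), ("var", "B")))),
          ("and",
           ("compound", ("functor", "b"),
            ("args", ("var", "A"), ("var", "C"))),
           ("binop", ("var", "B"), ("val", "is"),
            ("binop", ("var", "C"), ("val", "+"), ("val", "1")))))

pairs = [(a[0][1], b[0][1]) for a, b in variable_pairs(clause)]
print(pairs)  # [('A', 'A'), ('B', 'B'), ('C', 'C')]
```

A full implementation would also pair values with nearby variables and record the functors/operators of enclosing \textsf{compound}, \textsf{binop} and \textsf{unop} nodes, as described above.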
Patterns are extracted automatically, subject to the above constraints: each pattern connects a pair of variables or values. We find that such patterns work well for Prolog; other languages, however, will likely require different kinds of patterns to achieve good performance.
In order to avoid inducing rules specific to a particular program (covering typos and other idiosyncratic mistakes), we ignore rare patterns. In this study we used patterns that occurred in at least five submissions. These patterns form the feature space for rule learning.
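The frequency filter can be sketched as follows (an illustrative helper, assuming patterns are hashable values and each submission yields a set of patterns; the threshold of five submissions is the one used in this study):

```python
from collections import Counter

MIN_SUPPORT = 5  # a pattern must occur in at least five submissions

def frequent_patterns(submissions_patterns):
    """submissions_patterns: list of sets of patterns, one per program."""
    support = Counter()
    for patterns in submissions_patterns:
        support.update(set(patterns))  # count each pattern once per program
    return {p for p, n in support.items() if n >= MIN_SUPPORT}

# toy data: "p1" occurs in 6 programs, "p2" in only 2
subs = [{"p1"} for _ in range(6)] + [{"p2"} for _ in range(2)]
print(frequent_patterns(subs))  # {'p1'}
```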
\subsection{Learning rules}
We represent students’ programs in the feature space of AST patterns described above. Each pattern corresponds to one binary feature with value \textsf{true} when the pattern is present and \textsf{false} when it is absent. We classify each program as correct if it passes a predefined set of test cases, and incorrect otherwise. We use these labels for machine learning.
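As a minimal sketch of this representation (hypothetical helper; real feature construction would run over the extracted pattern set):

```python
# Each program becomes a boolean vector over the shared pattern feature space.
def to_features(program_patterns, feature_space):
    return {p: (p in program_patterns) for p in sorted(feature_space)}

space = {"p1", "p2", "p3"}
row = to_features({"p1", "p3"}, space)
print(row)  # {'p1': True, 'p2': False, 'p3': True}
```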
Since we can already establish program correctness using appropriate test cases, our goal here is not to classify new submissions. Instead, we wish to discover patterns associated with correct and incorrect programs. This approach to machine learning is called \emph{descriptive induction} -- the automatic discovery of patterns describing regularities in data. We use rule learning for this task because rule conditions can be easily translated into hints.
Before explaining the algorithm, let us discuss the reasons why a program can be incorrect. Our experience indicates that bugs in student programs can often be described by 1) some incorrect or \emph{buggy} pattern, which needs to be removed, or 2) some missing relation (pattern) between objects that should be included before the program can be correct. We shall now explain how both types of errors can be identified with rules.
To discover buggy patterns, the algorithm first learns \emph{negative rules} that describe incorrect programs. We use a variant of the CN2 algorithm~\cite{clark1991rule} implemented within the Orange data-mining toolbox~\cite{demsar2013orange}. Since we use rules to generate hints, and since hints should not be presented to students unless they are likely to be correct, we impose additional constraints on the rule learner:
\begin{itemize}
\item classification accuracy of each learned rule must exceed a threshold (we selected 90\%, as 10\% error seems acceptable for our application);
\item each conjunct in a condition must be significant with respect to the likelihood-ratio test (in our experiments we set the significance threshold to $p=0.05$);
\item a conjunct can only specify the presence of a pattern (in other words, we only allow feature-value pairs with the value \textsf{true}).
\end{itemize}
The first two constraints ensure good rules with only significant patterns, while the last constraint ensures rules only mention the presence (and not absence) of patterns as reasons for a program to be incorrect. This is important, since conditions in negative rules should contain patterns symptomatic of incorrect programs.
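The first two constraints can be illustrated with a simplified check (a re-implementation sketch, not the CN2/Orange code itself): a candidate rule is kept only if its accuracy on covered examples exceeds 90\% and its covered class distribution differs significantly from the prior under the likelihood-ratio test, approximated here by a chi-square test with one degree of freedom (critical value $3.841$ at $p=0.05$).

```python
import math

ACC_THRESHOLD = 0.90   # 90% rule accuracy, as in the text
CHI2_CRIT_1DF = 3.841  # chi-square critical value, 1 df, p = 0.05

def rule_ok(covered_pos, covered_neg, total_pos, total_neg):
    """covered_*: examples of the target/other class covered by the rule."""
    covered = covered_pos + covered_neg
    total = total_pos + total_neg
    if covered == 0:
        return False
    accuracy = covered_pos / covered
    # likelihood-ratio statistic of covered counts vs. the prior distribution
    lrs = 0.0
    for f, prior in ((covered_pos, total_pos / total),
                     (covered_neg, total_neg / total)):
        if f > 0:
            lrs += 2.0 * f * math.log(f / (covered * prior))
    return accuracy >= ACC_THRESHOLD and lrs >= CHI2_CRIT_1DF

# a rule covering 36 incorrect and 1 correct program (out of 100 each)
print(rule_ok(36, 1, 100, 100))  # True
print(rule_ok(5, 5, 100, 100))   # False: accuracy only 50%
```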
With respect to the second type of error, we could try the same approach and use the above algorithm to learn \emph{positive rules} for the class of correct programs. The conditional part of positive rules should define sufficient combinations of patterns that render a program correct.
It turns out to be difficult to learn accurate positive rules: many programs contain all the important patterns yet are still incorrect, because they also include buggy patterns.
A possible way to solve this problem is to remove programs that are covered by some negative rule. This way all known buggy patterns are removed from the data, and will not be included in positive rules. However, removing incorrect patterns also removes the need for specifying relevant patterns in positive rules. For example, if all incorrect programs were removed, the single rule “$\mathsf{true} \Rightarrow \mathsf{correct}$” would suffice, which cannot be used to generate hints. We achieved the best results by learning positive rules from the complete data set, but estimating their accuracy only on programs not covered by any negative rule.
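This evaluation scheme can be sketched as follows (hypothetical helper names; each program is represented here only by whether the rule matches it, whether any negative rule covers it, and its label):

```python
# Estimate a positive rule's accuracy only on programs that no negative
# rule covers, as described in the text.
def positive_rule_accuracy(rule_matches, negative_covers, labels):
    """All arguments are parallel lists, one entry per program."""
    kept = [(m, y) for m, neg, y in zip(rule_matches, negative_covers, labels)
            if not neg]                       # drop negatively-covered programs
    covered = [(m, y) for m, y in kept if m]  # programs the positive rule covers
    if not covered:
        return 0.0
    return sum(y == "ok" for _, y in covered) / len(covered)

matches = [True, True, True, False]
neg_cov = [False, True, False, False]  # second program hits a negative rule
labels  = ["ok", "bad", "ok", "bad"]
print(positive_rule_accuracy(matches, neg_cov, labels))  # 1.0
```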
While our main interest is discovering important patterns, induced rules can still be used to classify new programs, for example to evaluate rule quality. Classification proceeds in three steps: 1) if a negative rule covers the program, classify it as incorrect; 2) else if a positive rule covers the program, classify it as correct; 3) otherwise, if no rule covers the program, classify it as incorrect.
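The three-step classification can be written down directly (an illustrative sketch, with rules represented as frozensets of required patterns):

```python
def classify(program_patterns, negative_rules, positive_rules):
    if any(r <= program_patterns for r in negative_rules):
        return "incorrect"  # 1) a negative rule covers the program
    if any(r <= program_patterns for r in positive_rules):
        return "correct"    # 2) else a positive rule covers it
    return "incorrect"      # 3) no rule covers the program

neg = [frozenset({"buggy"})]
pos = [frozenset({"p1", "p2"})]
print(classify({"p1", "p2"}, neg, pos))           # correct
print(classify({"p1", "p2", "buggy"}, neg, pos))  # incorrect
print(classify({"p1"}, neg, pos))                 # incorrect
```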
We note that Prolog clauses can often be written in various ways. For example, the clause “\code{sum([],0).}” can also be written as
\begin{Verbatim}
sum(List,Sum):- List = [], Sum = 0.
\end{Verbatim}
\noindent
Our method covers such variations by including additional patterns and rules. Another option would be to use rules in conjunction with program canonicalization, by transforming each submission into a semantically equivalent normalized form before extracting patterns~\cite{rivers2015data-driven}.
\subsection{Generating hints}
Once we have induced the rules for a given problem, we can use them to provide hints based on buggy or missing patterns. To generate a hint for an incorrect program, each rule is considered in turn. We consider two types of feedback: \emph{buggy hints} based on negative rules, and \emph{intent hints} based on positive rules.
First, all negative rules are checked to find any known incorrect patterns in the program. To find the most likely incorrect patterns, the rules are considered in the order of decreasing quality. If all patterns in the rule “$p_1 \land \dots \land p_k \Rightarrow \mathsf{incorrect}$” match, we highlight the corresponding leaf nodes. As a side note, we found that most negative rules are based on the presence of a single pattern. For the incorrect \code{sum} program from the previous section, our method produces the following highlight
\begin{Verbatim}
sum([],0). % \textit{base case:} the empty list sums to zero
sum([H|T],\red{\underline{Sum}}):- % \textit{recursive case:}
sum(T,\red{\underline{Sum}}), % sum the tail and
Sum is Sum + H. % add first element (\textit{bug:} reused variable)
\end{Verbatim}
\noindent
based on the rule “$p \Rightarrow \mathsf{incorrect}$”, where $p$ is the solid red pattern in Fig.~\ref{fig:sum}. This rule covers 36 incorrect programs, and one correct program using an unusual solution strategy.
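Buggy-hint selection can be sketched as follows (hypothetical representation: each negative rule is a pair of a quality score and a frozenset of required patterns; in practice, the leaf nodes matched by the reported patterns are what gets highlighted):

```python
def buggy_hint(program_patterns, negative_rules):
    """Return the patterns of the best fully-matching negative rule."""
    for _, patterns in sorted(negative_rules, key=lambda r: -r[0]):
        if patterns <= program_patterns:
            return patterns  # highlight the leaf nodes of these patterns
    return None              # no negative rule matches

rules = [(0.95, frozenset({"reused-var"})),
         (0.91, frozenset({"bad-base"}))]
print(buggy_hint({"reused-var", "p1"}, rules))  # frozenset({'reused-var'})
print(buggy_hint({"p1"}, rules))                # None
```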
If no negative rule matches the program, we use positive rules to determine the student’s intent. Positive rules group patterns that together indicate a high likelihood that the program is correct. Each positive rule thus defines a particular “solution strategy” in terms of AST patterns. We reason that alerting the student to a missing pattern could help them complete the program without revealing the whole solution.
When generating a hint from positive rules, we consider all \emph{partially matching} rules “$p_1 \land \dots \land p_k \Rightarrow \mathsf{correct}$”, where the student’s program matches some (but not all) patterns $p_i$. For each such rule we store the number of matching patterns and the set of missing patterns. We then return the most common missing pattern among the rules with the most matching patterns.
For example, if we find the following missing pattern for an incorrect program implementing the \code{sister} predicate:
\begin{Verbatim}[fontfamily=sf]
(clause (head (compound (functor ‘\code{sister}’) (args var))) (binop var ‘\code{\textbackslash{}=}’))\textrm{,}
\end{Verbatim}
\noindent
we could display a message to the student saying “comparison between \code{X} and some other value is missing”, or “your program is missing the goal \code{X} \code{\textbackslash{}=} \code{?}”.
This method can find several missing patterns for a given partial program. In such cases we return the most commonly occurring pattern as the main hint, and other candidate patterns as alternative hints. We use main and alternative intent hints to establish the upper and lower bounds when evaluating hints.
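Intent-hint selection, including the split into a main hint and alternative hints, can be sketched as follows (positive rules again represented as frozensets of patterns; names are illustrative):

```python
from collections import Counter

def intent_hint(program_patterns, positive_rules):
    """Return (main missing pattern, alternative missing patterns)."""
    best_matched, missing_counts = -1, Counter()
    for rule in positive_rules:
        matched = len(rule & program_patterns)
        missing = rule - program_patterns
        if not missing or matched == 0:
            continue  # skip fully matched or wholly unmatched rules
        if matched > best_matched:
            best_matched, missing_counts = matched, Counter(missing)
        elif matched == best_matched:
            missing_counts.update(missing)
    if not missing_counts:
        return None, []
    ranked = [p for p, _ in missing_counts.most_common()]
    return ranked[0], ranked[1:]  # main hint, alternative hints

rules = [frozenset({"p1", "p2", "neq"}), frozenset({"p1", "neq"})]
main, alts = intent_hint({"p1", "p2"}, rules)
print(main)  # 'neq': the missing comparison pattern
```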
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "aied2017"
%%% End: