\section{Method}

The following subsections explain the three main components of our approach: extracting patterns from student submissions, learning classification rules for correct and incorrect programs, and using those rules to generate hints.

\subsection{Extracting patterns}
\label{sec:extracting-patterns}

We extract patterns from student programs by selecting certain subsets of leaves in a program’s AST, and building up patterns that match nodes in those subsets. For this paper we always select pairs of nodes from the same clause: either two nodes referring to the same variable (like the examples above), or a value (such as \code{0} or the empty list \code{[]}) and another variable or value that occurs in the same \code{compound} or \code{binop}. For example, in the clause\footnote{Occurrences of the three variables \code{A}, \code{B} and \code{C} are subscripted for disambiguation.}

\begin{Verbatim}
a(A\textsubscript{1},B\textsubscript{1}):-
  b(A\textsubscript{2},C\textsubscript{1}),
  B\textsubscript{2} is C\textsubscript{2} + 18.
\end{Verbatim}

\noindent
we would select the following sets of leaf nodes: \{\code{A\textsubscript{1}},\code{A\textsubscript{2}}\}, \{\code{B\textsubscript{1}},\code{B\textsubscript{2}}\}, \{\code{C\textsubscript{1}},\code{C\textsubscript{2}}\}, \{\code{B\textsubscript{2}},\code{18}\}, and \{\code{C\textsubscript{2}},\code{18}\}.
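The selection step can be illustrated with the following Python sketch, which uses a hypothetical flat encoding of a clause’s leaves as \code{(name, kind, parent)} triples rather than the actual AST:

```python
from collections import defaultdict
from itertools import combinations

def select_leaf_sets(clause_leaves):
    """Select leaf-node sets for one clause: every pair of occurrences
    of the same variable, plus every pair sharing a compound/binop
    parent where at least one member is a value.  `clause_leaves` is a
    list of (name, kind, parent) triples -- a simplified stand-in for
    the real AST representation."""
    by_var = defaultdict(list)
    by_parent = defaultdict(list)
    for i, (name, kind, parent) in enumerate(clause_leaves):
        if kind == "var":
            by_var[name].append(i)
        by_parent[parent].append(i)
    pairs = []
    for occurrences in by_var.values():      # same-variable pairs
        pairs.extend(combinations(occurrences, 2))
    for siblings in by_parent.values():      # value + sibling pairs
        for i, j in combinations(siblings, 2):
            if "value" in (clause_leaves[i][1], clause_leaves[j][1]):
                pairs.append((i, j))
    return pairs
```

On the example clause above this yields exactly the five listed sets: the three same-variable pairs plus \code{\{B\textsubscript{2},18\}} and \code{\{C\textsubscript{2},18\}}.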

We build a pattern for each set $S$ of selected leaf nodes by walking the AST in depth-first order, and recording nodes that lie on paths to elements of $S$. As explained above, we omit \code{and} nodes, allowing the pattern to generalize to more programs. Patterns also include certain nodes that do not lie on a path to any selected leaf. Specifically, for each included \code{compound} node we also include the corresponding \code{functor} with the predicate name. We also include the operator names (like \code{+} and \code{is}) for all unary and binary (\code{binop}) nodes in the pattern.
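The walk itself can be sketched as follows (again a simplified illustration: nodes are hypothetical \code{(type, name, children)} triples, and selected leaves are compared by object identity, rather than the actual AST machinery):

```python
def build_pattern(root, selected):
    """Build a pattern for the leaf set `selected` by a depth-first walk
    of `root`, keeping only nodes on a path to a selected leaf.  `and`
    nodes are omitted (their kept children are spliced into the parent),
    and compound/binop/unop nodes keep their functor or operator name
    even though it lies on no such path.  Nodes are hypothetical
    (type, name, children) triples."""
    def walk(node):
        typ, name, children = node
        if not children:                     # leaf node
            return [(typ, name)] if any(node is s for s in selected) else []
        kept = [sub for child in children for sub in walk(child)]
        if not kept:                         # no selected leaf below
            return []
        if typ == "and":                     # omit `and`: generalizes
            return kept
        head = (typ, name) if typ in ("compound", "binop", "unop") else (typ,)
        return [head + tuple(kept)]
    roots = walk(root)
    return roots[0] if roots else None
```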

Patterns constructed in this way form the set of features for rule learning. To keep this set at a reasonable size, we only use patterns that have appeared in programs submitted by at least five students.

\subsection{Learning rules for correct and incorrect programs}
\begin{figure}[t]
\centering
 \begin{enumerate}
  \item Let $P$ be the data set of all student programs, each described with a set of AST patterns
  and classified as correct (it passes the unit tests) or incorrect.
  \item Let $learn\_rules(target, P, P_1, sig, acc)$ be a method that learns
  a set of rules for class $target$ from data $P$, subject to two constraints:
  every attribute-value pair in the condition part of a rule must be significant
  with respect to the likelihood-ratio test ($p<sig$), and the classification
  accuracy of each rule on data $P_1$ must be at least $acc$.
  \item Let \textit{I-rules} $= learn\_rules(incorrect, P, P, 0.05, 0.9)$.
  \item Let $P_c$ be the data $P$ without the programs covered by the \textit{I-rules}.
  \item Let \textit{C-rules} $= learn\_rules(correct, P, P_c, 0.05, 0.9)$.
  \item Return the \textit{I-rules} and \textit{C-rules}.
 \end{enumerate}
 \caption{An outline of the algorithm for learning rules. The method $learn\_rules$,
 which induces rules  for a specific class, is a variant of the 
 CN2 algorithm~\cite{YYY} implemented within the Orange data-mining suite~\cite{XXX}. 
 In all our experiments, $sig$ was set to 0.05 and $acc$ was set to 0.9. }
 \label{figure:algorithm}
\end{figure}

As described in the previous section, the feature space consists of all frequent AST patterns. 
The submitted programs are represented in this feature space and classified as either
correct, if they pass all prespecified unit tests, or incorrect otherwise. These data 
serve as the training set for machine learning. 

However, since we can always validate a program with unit tests,
the goal of machine learning here is not to classify new programs, but to discover patterns
that are correlated with program correctness. This approach is 
referred to as descriptive induction: the automatic discovery of patterns describing 
regularities in data. We use rule learning for this task because rule-based models are easy to comprehend. 

Before explaining the algorithm, we need to discuss the reasons why a program
can be incorrect. In our pedagogical experience, a student program is 
incorrect 1) if it contains an incorrect pattern, which needs to be removed, or 2) if it 
lacks a certain relation (pattern) that must be included before the program can be correct. 
We now explain how both types of error can be identified with rules. 

To discover patterns related to the first point, the algorithm first learns rules that describe
incorrect programs. The conditions of these rules contain frequent patterns that are symptomatic of 
incorrect programs. Since the rules are used to generate hints, and since hints should not be 
presented to students unless they are likely to be correct, we require that each learned rule's 
classification accuracy exceed a certain threshold (90\% in our experiments), 
that each conjunct in a condition be significant with respect to the likelihood-ratio test ($p=0.05$ in our experiments),
and that a conjunct only specify the presence of a pattern. The first two constraints are needed 
to induce good rules with significant patterns, while the third ensures that rules mention
only the presence (and not the absence) of patterns as reasons for a program to be incorrect. 

With respect to the second type of error, we could try the same approach and learn rules for the class of 
correct programs. Given accurate rules for correct programs, the conditions of these rules would
define sufficient groups of patterns that render a program correct. However, it turns out to be 
difficult to learn accurate rules for correct programs, since such rules would have to contain all relevant patterns 
and rule out all incorrect ones, yet a conjunct can only 
specify the presence of a pattern. Even if specifying the absence of patterns were allowed in rule conditions, 
the learning problem would remain difficult, because there are usually many incorrect patterns. 
A possible solution is to learn from a data set from which the programs covered by the rules for the incorrect class have been removed. 
This way all known incorrect patterns are absent from the data and no longer needed in rule conditions. 
However, removing incorrect programs also removes the need for relevant patterns: in the extreme case where all incorrect programs are removed, 
the rule ``IF true THEN class=correct'' would suffice. 
Such a rule contains no relevant patterns and could not be used to generate hints. 
We achieved the best results by learning from both data sets: the original data set (with all programs)
is used to learn candidate rules, while the filtered data set is used to test whether a rule achieves the 
required classification accuracy (90\%). 
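The data flow of this two-stage procedure (outlined in Figure~\ref{figure:algorithm}) can be sketched in Python; here \code{learn\_rules} is only a stand-in for the CN2 variant, programs are \code{(patterns, correct)} pairs, and rules are modelled as frozensets of patterns that must all be present:

```python
def learn_all_rules(programs, learn_rules):
    """Two-stage rule learning: I-rules are learned and evaluated on the
    full data set; C-rules are learned on the full data set but must
    reach the accuracy threshold on the programs NOT covered by any
    I-rule.  `learn_rules(target, data, eval_data, sig, acc)` stands in
    for the CN2 variant; a rule is a frozenset of required patterns."""
    i_rules = learn_rules("incorrect", programs, programs, 0.05, 0.9)

    def covered(patterns):
        # a program is covered if some I-rule's patterns all appear in it
        return any(rule <= patterns for rule in i_rules)

    filtered = [p for p in programs if not covered(p[0])]
    c_rules = learn_rules("correct", programs, filtered, 0.05, 0.9)
    return i_rules, c_rules
```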

Figure~\ref{figure:algorithm} contains an outline of the algorithm. The rules describing
incorrect programs are called \textit{I-rules}, and the rules describing correct programs are called \textit{C-rules}. 

Even though our main interest is the discovery of patterns, we can still use the induced rules to classify
new programs, for example to evaluate the quality of the rules. The classification procedure has three steps: 
first, check whether an \textit{I-rule} covers the program to be classified; if one does, 
classify the program as incorrect. Second, check whether a \textit{C-rule} covers the program, and classify 
it as correct if one does. Otherwise, if no induced rule covers the program, classify it as 
incorrect. 
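A minimal sketch of this three-step classifier, with rules again modelled as sets of patterns that must all be present in a program:

```python
def classify(program_patterns, i_rules, c_rules):
    """Classify a program (given as a set of AST patterns): I-rules take
    precedence, then C-rules, and the default class is incorrect."""
    if any(rule <= program_patterns for rule in i_rules):
        return "incorrect"
    if any(rule <= program_patterns for rule in c_rules):
        return "correct"
    return "incorrect"   # no rule covers the program
```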

\subsection{Generating hints}

Once we have induced the rules for a given problem, we can use them to provide hints based on buggy or missing patterns. To generate a hint for an incorrect program, each rule is considered in turn.

First, all \textit{I-rules} are checked to find any known incorrect patterns in the program. To find the most likely incorrect patterns, the rules are considered in order of decreasing quality. If all patterns of a rule ``$p_1 \land \dots \land p_k \Rightarrow \mathsf{incorrect}$'' match, we highlight the relevant leaf nodes. As an aside, we found that most \textit{I-rules} are based on a single pattern. For the incorrect \code{sum} program from the previous section, our method produces the following highlight:

\begin{Verbatim}
sum([],0).          % \textit{base case:} the empty list sums to zero
sum([H|T],\red{\underline{Sum}}):-    % \textit{recursive case:}
  sum(T,\red{\underline{Sum}}),       %  sum the tail and
  Sum is Sum + H.   %  add first element (\textit{bug:} reused variable)
\end{Verbatim}
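The rule-selection step behind such a highlight can be sketched as follows (a simplified illustration: rules are hypothetical \code{(quality, patterns)} pairs, and computing the actual leaf-node highlight from the matched patterns is omitted):

```python
def matching_i_rule(program_patterns, i_rules):
    """Return the patterns of the best-quality I-rule whose patterns all
    appear in the program, or None if no I-rule matches; the caller
    would then highlight the AST leaf nodes those patterns match.
    `i_rules` is a list of (quality, frozenset_of_patterns) pairs."""
    for quality, rule in sorted(i_rules, key=lambda r: r[0], reverse=True):
        if rule <= program_patterns:
            return rule
    return None
```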

If no \textit{I-rule} matches the program, we use the \textit{C-rules} to determine the student's intent. \textit{C-rules} group patterns that together indicate a high likelihood that the program is correct. Each \textit{C-rule} thus defines a particular ``solution strategy'' in terms of AST patterns. We reason that a hint alerting the student to a missing pattern can help them complete the program without revealing the whole solution.

When generating a hint from the \textit{C-rules}, we consider all \emph{partially} matching rules ``$p_1 \land \dots \land p_k \Rightarrow \mathsf{correct}$'', where the student's program matches some (but not all) patterns $p_i$. For each such rule we store the number of matching patterns and the set of missing patterns. We then return the most common missing pattern among the rules with the most matching patterns.
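This selection can be sketched as follows (\code{best\_missing\_pattern} is a hypothetical helper name; \textit{C-rules} are again modelled as sets of patterns):

```python
from collections import Counter

def best_missing_pattern(program_patterns, c_rules):
    """Among partially matching C-rules (some but not all patterns
    present in the program), keep those with the most matching patterns
    and return the most common missing pattern, or None if no rule
    matches partially.  `c_rules` is a list of frozensets of patterns."""
    partial = []
    for rule in c_rules:
        matched = rule & program_patterns
        missing = rule - program_patterns
        if matched and missing:              # partial match only
            partial.append((len(matched), missing))
    if not partial:
        return None
    best = max(n for n, _ in partial)
    counts = Counter(p for n, missing in partial if n == best
                     for p in missing)
    return counts.most_common(1)[0][0]
```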

For example, if we find the following missing pattern for an incorrect program implementing the \code{sister} predicate:

\begin{Verbatim}[fontfamily=sf]
(clause (head (compound (functor "\code{sister}") (args var))) (binop var "\code{\textbackslash{}=}"))\textrm{,}
\end{Verbatim}

\noindent
we could display a message to the student saying ``you are missing a comparison between \code{X} and some other value, of the form \code{X} \code{\textbackslash{}=} \code{?}''.


%%% Local Variables:
%%% mode: latex
%%% TeX-master: "aied2017"
%%% End: