Diffstat (limited to 'aied2017/method.tex')
 aied2017/method.tex | 54 +++++-----------------------------------
 1 file changed, 11 insertions(+), 43 deletions(-)
diff --git a/aied2017/method.tex b/aied2017/method.tex
index e70d18a..797f625 100644
--- a/aied2017/method.tex
+++ b/aied2017/method.tex
@@ -21,6 +21,7 @@ We build a pattern for each set $S$ of selected leaf nodes by walking the AST in
Patterns constructed in this way form the set of features for rule learning. To keep this set at a reasonable size, we only use patterns that have appeared in programs submitted by at least five students.
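The frequency filter described above could be sketched as follows; the function and variable names are ours, not from the paper, and the pattern representation (a set of pattern identifiers per student) is an assumption for illustration.

```python
from collections import defaultdict

def frequent_patterns(submissions, min_students=5):
    """Keep only AST patterns that appear in programs submitted by at
    least `min_students` distinct students (the paper's threshold)."""
    # submissions: iterable of (student_id, set_of_patterns) pairs
    students_with = defaultdict(set)
    for student, patterns in submissions:
        for p in patterns:
            students_with[p].add(student)
    return {p for p, seen in students_with.items() if len(seen) >= min_students}

# Hypothetical example: only pattern "A" occurs for five distinct students.
subs = [("s1", {"A", "B"}), ("s2", {"A"}), ("s3", {"A", "C"}),
        ("s4", {"A"}), ("s5", {"A", "B"})]
print(frequent_patterns(subs, min_students=5))  # → {'A'}
```

Counting distinct students (rather than occurrences) prevents one student's many resubmissions from promoting a rare pattern into the feature set.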
\subsection{Learning rules for correct and incorrect programs}
+
\begin{figure}[t]
\centering
\begin{enumerate}
@@ -43,53 +44,20 @@ Patterns constructed in this way form the set of features for rule learning. To
\label{figure:algorithm}
\end{figure}
-As described in the previous section, the feature space contains all frequent AST patterns.
-The submitted programs are then represented in this feature space and classified as either
-correct, if they pass all prespecified tests, or incorrect otherwise. Such data
-serves as the data set for machine learning.
-
-However, since we are always able to validate a program with unit tests,
-the goal of machine learning is not to classify new programs, but to discover patterns
-that are correlated with correctness of programs. This approach of machine learning is
-referred to as descriptive induction, that is the automatic discovery of patterns describing
-regularities in data. We use rule learning for this task since rule-based models are easy to comprehend.
-
-Before we can explain the algorithm, we need to discuss the reasons why a program
-can be incorrect. From our previous pedagogical experience, a student program is
-incorrect 1) if some incorrect pattern is present, which needs to be removed, or 2) if the
-program lacks a certain relation (pattern) that should be included before the program can be correct.
-We shall now explain how both types of errors can be identified with rules.
-
-To discover patterns related to the first point, the algorithm first learns rules that describe
-incorrect programs. The conditions of these rules contain frequent patterns that symptom
-incorrect programs. Since rules are used to generate hints and since hints should not be
-presented to students unless they are probably correct, we require that each learned rule's
-classification accuracy exceeds a certain threshould (in our experiments we used 90\%),
-each conjunct in a condition is significant with respect to the likelihood-ratio test (with $p=0.05$ in our experiments),
-and a conjunct can only specify the presence of a pattern. The former two constraints are needed
-to induce good rules with significant patterns, while the latter constraint assures that rules mention
-only presence (and not absence) of patterns as reasons for a program to be incorrect.
-
-With respect to the second type of error, we could try the same approach and learn rules for the class of
-correct programs. Having accurate rules for correct programs, the conditional part of these rules would
-define sufficient groups of patterns that render a program correct. However, it turns out that
-it is difficult to learn accurate rules for correct programs, since these rules should contain all relevant patterns
-and prevent incorrect patterns, yet a conjunct can only
-specify the presence of a pattern. If specifying absence of patterns was allowed in rules' condition,
-the learning problem would still be difficult, since usually there are many incorrect patterns.
-A possible way to solve this problem is to learn from data set not containing programs that are covered by rules for incorrect class.
-This way all known incorrect patterns are removed from the data and are not needed anymore in conditions of rules.
-However, removing incorrect patterns also removes the need for relevant patterns. For example, if all incorrect programs were removed,
-a rule ``IF True then class=correct'' would suffice.
-Such rule does not contain any relevant patterns and could not be used to generate hints.
-We achieved the best results by learning from both data sets. The original data set (with all programs)
-is used to learn rules, while the filtered data are used to test whether a rule achieves the
-required classification accuracy (90\%).
+Submitted programs are represented in the feature space of AST patterns described above. We classify programs as correct or incorrect based on predefined unit tests for each problem, and use these labels for machine learning.
+
+Since programs can already be validated with appropriate unit tests, our goal is not classifying new submissions, but rather to discover patterns correlated with program correctness. This machine-learning approach is called \emph{descriptive induction} -- the automatic discovery of patterns describing regularities in data. We use rule learning for this task because rule-based models are easily comprehensible.
+
+Before explaining the algorithm, let us discuss the reasons why a program can be incorrect. Our teaching experience indicates that bugs in student programs can often be described by 1) some incorrect pattern, which needs to be removed, or 2) some missing relation (pattern) that should be included before the program can be correct. We shall now explain how both types of errors can be identified with rules.
+
+To discover patterns related to the first point, the algorithm first learns rules that describe incorrect programs. The conditions of these rules contain frequent patterns symptomatic of incorrect programs. Since rules are used to generate hints, and since hints should not be presented to students unless they are probably correct, we require that each learned rule's classification accuracy exceeds a certain threshold (90\% in our experiments), that each conjunct in a condition is significant with respect to the likelihood-ratio test (with $p=0.05$ in our experiments), and that a conjunct can only specify the presence of a pattern. The first two constraints are needed to induce good rules with significant patterns, while the third ensures that rules mention only the presence (and not the absence) of patterns as reasons for a program to be incorrect.
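The accuracy and significance checks on a candidate rule could be sketched as below. This is our own illustration, not the paper's implementation; the likelihood-ratio statistic follows the usual CN2-style formulation, comparing the class distribution of covered examples against the prior distribution.

```python
import math

def lr_statistic(pos_cov, neg_cov, pos_all, neg_all):
    """Likelihood-ratio statistic for a rule: compares the class
    distribution among covered examples with the overall distribution."""
    cov, total = pos_cov + neg_cov, pos_all + neg_all
    g = 0.0
    for observed, prior in ((pos_cov, pos_all), (neg_cov, neg_all)):
        expected = cov * prior / total
        if observed > 0:
            g += 2.0 * observed * math.log(observed / expected)
    return g  # approx. chi-squared with 1 d.f.; 3.84 corresponds to p = 0.05

def keep_rule(pos_cov, neg_cov, pos_all, neg_all,
              min_accuracy=0.90, g_threshold=3.84):
    """Accept a rule only if it is both accurate enough and significant."""
    accuracy = pos_cov / (pos_cov + neg_cov)
    return (accuracy >= min_accuracy and
            lr_statistic(pos_cov, neg_cov, pos_all, neg_all) >= g_threshold)
```

For instance, a rule covering 95 incorrect and 5 correct programs out of a balanced set of 200 passes both tests, while a rule covering a 50/50 split fails on accuracy alone.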
+
+With respect to the second type of error, we could try the same approach and learn rules for the class of correct programs. Given accurate rules for correct programs, their conditions would define sufficient groups of patterns that render a program correct. However, it turns out to be difficult to learn accurate rules for correct programs, since such rules should require all relevant patterns and preclude incorrect ones, yet a conjunct can only specify the presence of a pattern. Even if specifying the absence of patterns were allowed in rule conditions, the learning problem would remain difficult, since there are usually many incorrect patterns. A possible way around this is to learn from a data set that excludes programs covered by rules for the incorrect class; all known incorrect patterns are then absent from the data and no longer needed in rule conditions. However, removing incorrect programs also removes the need for relevant patterns: if all incorrect programs were removed, the rule ``$\mathsf{true} \Rightarrow \mathsf{correct}$'' would suffice, yet such a rule contains no relevant patterns and could not be used to generate hints. We achieved the best results by learning from both data sets: the original data set (with all programs) is used to learn rules, while the filtered data set is used to test whether a rule achieves the required classification accuracy (90\%).
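The two-dataset scheme for C-rules could be sketched as follows (names and the program representation are our assumptions): candidate rules are learned on all programs, but a rule's accuracy is measured only on programs not covered by any I-rule.

```python
def passes_on_filtered(rule_covers, programs, i_rules, min_accuracy=0.90):
    """Validate a candidate C-rule on the filtered data: programs not
    covered by any I-rule. Returns True if the rule covers something
    there and reaches the required accuracy."""
    filtered = [p for p in programs if not any(ir(p) for ir in i_rules)]
    covered = [p for p in filtered if rule_covers(p)]
    if not covered:
        return False
    accuracy = sum(p["correct"] for p in covered) / len(covered)
    return accuracy >= min_accuracy

# Hypothetical data: one known incorrect pattern, one relevant pattern.
programs = [
    {"patterns": {"good"}, "correct": True},
    {"patterns": {"good"}, "correct": True},
    {"patterns": {"bad"}, "correct": False},
    {"patterns": set(), "correct": False},
]
i_rules = [lambda p: "bad" in p["patterns"]]
```

Note that the degenerate rule "true implies correct" still fails here: on the filtered data it covers the uncovered incorrect program as well, so its accuracy stays below the threshold, whereas a rule requiring the relevant pattern passes.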
Figure~\ref{figure:algorithm} contains an outline of the algorithm. The rules describing
incorrect programs are called \emph{I-rules} and the rules for correct programs are called \emph{C-rules}.
-Even though our main interest is discovery of patterns, we could still use induced rules to classify
+Even though our main interest is discovery of patterns, we can still use induced rules to classify
new programs, for example to evaluate the quality of rules. The classification procedure has three steps:
first, check whether an \emph{I-rule} covers the program to be classified, and if one
does, classify it as incorrect. Then, check whether a \emph{C-rule} covers the program and classify