\section{Introduction}

% why automatic feedback
Programming education is becoming increasingly accessible through massive open online courses. Since thousands of students can attend such a course, it is impossible for teachers to evaluate each participant's work individually. On the other hand, timely feedback that directly addresses students' mistakes can greatly aid learning. Providing such feedback automatically could thus significantly enhance these courses.

% ITS background
Traditional programming tutors use manually constructed domain models to generate feedback. Model-tracing tutors simulate the problem-solving \emph{process}: how students write programs. This is challenging because programming has no well-defined solution steps (as there are, for example, in chess). Many tutors instead analyze only the individual programs submitted by students, disregarding how each program evolved; their models are often coded in terms of constraints or bug libraries~\cite{keuning2016towards}.

% data-driven domain modeling
Developing the domain model requires significant knowledge-engineering effort~\cite{folsom-kovarik2010plan}. This is particularly true for programming tutors, where most problems have several alternative solutions with many possible implementations~\cite{le2013operationalizing}. Data-driven tutors reduce the necessary effort by mining educational data -- often from online courses -- to learn common errors and generate feedback~\cite{rivers2015data-driven,nguyen2014codewebs,jin2012program}.

% problem statement
This paper addresses the problem of finding useful features to support data mining in programming tutors. Features should be robust against superficial or irrelevant variations in program code, and relatable to the knowledge components of the target skill (programming), so as to support hint generation.

% our approach: patterns + rules
We describe features with \emph{patterns} that encode relations between variables in a program's abstract syntax tree (AST). Each pattern captures a path between certain ``interesting'' leaf nodes; by omitting some nodes on this path, the pattern matches different programs that contain the same relation. We then induce rules that predict program correctness from AST patterns, allowing us to generate hints based on missing or incorrect patterns.
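
% illustrative sketch (simplified)
To make the idea concrete, the following minimal sketch extracts such patterns from a toy tuple-based AST. It is written in Python; the AST encoding, node labels and the example clause are illustrative assumptions rather than the exact representation used in this work. Each pattern pairs two root-to-leaf paths that end in the same variable, and matching allows intermediate nodes to be omitted (subsequence embedding).
\begin{verbatim}
def paths_to_variables(node, prefix=()):
    """Yield (variable, root-to-leaf label path) for every variable leaf."""
    if isinstance(node, str):              # leaf: an atom or a variable
        if node[:1].isupper():             # Prolog-style variable name
            yield node, prefix + (node,)
        return
    label, *children = node                # internal node: (label, child, ...)
    for child in children:
        yield from paths_to_variables(child, prefix + (label,))

def patterns(ast):
    """One pattern per pair of paths ending in the same variable."""
    by_var = {}
    for var, path in paths_to_variables(ast):
        by_var.setdefault(var, []).append(path)
    for paths in by_var.values():
        for i in range(len(paths)):
            for j in range(i + 1, len(paths)):
                yield (paths[i], paths[j])

def embeds(short, long):
    """True if 'short' is a subsequence of 'long' (nodes may be omitted)."""
    it = iter(long)
    return all(label in it for label in short)

def matches(pattern, ast):
    """A pattern matches a program if both of its paths embed into some
    pair of variable paths in that program's AST."""
    return any(embeds(pattern[0], a) and embeds(pattern[1], b)
               for a, b in patterns(ast))

# Toy AST for the Prolog clause  dup([H|T], [H,H|R]) :- dup(T, R).
clause = ("clause",
          ("head", ("dup", ("list", "H", "T"),
                           ("list", "H", ("list", "H", "R")))),
          ("body", ("dup", "T", "R")))

pats = list(patterns(clause))
print(len(pats), "patterns, e.g.", pats[0])
# Presence of each pattern becomes a binary attribute for rule learning:
print([int(matches(p, clause)) for p in pats])
\end{verbatim}
Presence or absence of such patterns across many submissions can then serve as the attributes from which correctness-predicting rules are induced.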

% evaluation
We evaluated our approach on existing Prolog programs submitted by students during past lab sessions of a second-year university course. For most problems, the classification accuracy of the induced rules was between 85\% and 99\%. For 75\% of incorrect submissions we were able to suggest potentially useful patterns -- those that the student actually implemented in the final, correct program.

% contributions
The main contributions presented in this paper are: AST patterns as features for machine learning, a rule-based model for predicting program correctness, and hints generated from incorrect or missing patterns in student programs.

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "aied2017"
%%% End: