====== Learning intermediate goals for human problem solving ======
  
Learning problem solving knowledge is a technique often used to improve the efficiency of problem solvers in areas such as behavior cloning, planning, or game playing. In this paper, we focus on learning problem solving knowledge that explains computer problem solving and can be used by humans to solve the same problems without a computer. We describe an algorithm for learning strategies, where a strategy is a sequence of subgoals. Each subgoal is a prerequisite for the next goal in the sequence, such that achieving one goal enables us to achieve the next with a limited amount of search. Strategies are learned from a state-space representation of the domain and a set of attributes used to define subgoals. We first demonstrate the algorithm on a simple domain of solving mathematical equations, where we use the complete state-space to learn strategies. For the other two domains, the 8-puzzle and Prolog programming, we introduce an extension of the algorithm that can learn from a subspace of states determined by example solutions.
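
The sketch below is a minimal illustration (not the paper's implementation and not the Orange add-on API) of the core idea: states are described by attributes, a subgoal is a conjunction of attribute values, and each subgoal in a strategy must be achievable from the previous one within a bounded amount of search. The toy equation domain, the attribute names, and the ''achieve'' helper are invented here purely for illustration.

<code python>
# Illustrative sketch, assuming a toy equation domain: a strategy is a sequence
# of subgoals (attribute conditions), each reachable from the previous one
# within a small, bounded number of moves.
from collections import deque

# Toy domain: simplify the linear equation a*x + b = c, represented as (a, b).
def moves(state):
    a, b = state
    successors = []
    if b != 0:
        successors.append((a, 0))   # move the constant term to the right-hand side
    if b == 0 and a not in (0, 1):
        successors.append((1, 0))   # divide both sides by the coefficient a
    return successors

# Attributes describing a state; subgoals are conjunctions of attribute values.
def attributes(state):
    a, b = state
    return {"coef_is_one": a == 1, "const_is_zero": b == 0}

def satisfies(state, subgoal):
    attrs = attributes(state)
    return all(attrs[name] == value for name, value in subgoal.items())

def achieve(state, subgoal, depth):
    """Breadth-first search up to `depth` moves; return a state satisfying
    the subgoal, or None if it is not reachable within the limit."""
    frontier = deque([(state, 0)])
    seen = {state}
    while frontier:
        current, d = frontier.popleft()
        if satisfies(current, subgoal):
            return current
        if d == depth:
            continue
        for nxt in moves(current):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None

# A hand-written strategy for this toy domain: first make the constant zero,
# then make the coefficient one (the solved form x = c).
strategy = [{"const_is_zero": True},
            {"coef_is_one": True, "const_is_zero": True}]

state = (5, 3)   # the equation 5x + 3 = c
for subgoal in strategy:
    state = achieve(state, subgoal, depth=1)
    assert state is not None, "subgoal not reachable within the search limit"
print("solved form reached:", state)   # (1, 0), i.e. x = c'
</code>

Here the strategy is written by hand; learning it would amount to searching for attribute conjunctions that make each such bounded reachability check succeed across the training states, which is the part the paper's algorithm addresses.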
  
Paper submitted to journal.
  
  
  * [[https://github.com/martinmozina/orange3-gol |Source code of Orange 3 add-on (new version in Python, but without active learning)]]
  * [[https://github.com/martinmozina/orangol |Source code of Orange 2 add-on (in C++, with active learning)]]
  
  