====== Learning intermediate goals for human problem solving ======

Learning problem solving knowledge is a technique often used to improve the efficiency of problem solvers in areas such as behavior cloning, planning, or game playing. In this paper, we focus on learning problem solving knowledge that explains computer problem solving and can be used by a human to solve the same problems without a computer. We describe an algorithm for learning strategies, where a strategy is a sequence of subgoals. Each subgoal is a prerequisite for the next goal in the sequence, such that achieving one goal enables us to achieve the next goal with a limited amount of search. Strategies are learned from a state-space representation of the domain and a set of attributes used to define subgoals. We first demonstrate the algorithm on a simple domain of solving mathematical equations, where we use the complete state-space to learn strategies. In the other two domains, the 8-puzzle and Prolog programming, we introduce an extension of the algorithm that can learn from a subspace of states determined by example solutions.
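
To make the notion of a strategy concrete, here is a minimal sketch of how a learned strategy could be followed: each subgoal in the sequence is reached from the previous one with a bounded search. This is illustrative only, not the paper's learning algorithm or the add-on's API; ''follow_strategy'', the search budget, and the toy numeric domain below are all hypothetical.

<code python>
from collections import deque

def follow_strategy(start, subgoals, successors, max_depth=3):
    """Follow a strategy: reach each subgoal in turn with bounded search.

    start      -- initial state (must be hashable)
    subgoals   -- list of predicates over states; the last is the main goal
    successors -- function mapping a state to its neighbour states
    max_depth  -- search budget between consecutive subgoals
    """
    state = start
    path = [state]
    for subgoal in subgoals:
        # Bounded breadth-first search from the current state to the subgoal.
        frontier = deque([(state, [])])
        seen = {state}
        segment = None
        while frontier:
            current, trail = frontier.popleft()
            if subgoal(current):
                segment = trail
                break
            if len(trail) < max_depth:
                for nxt in successors(current):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, trail + [nxt]))
        if segment is None:
            return None  # subgoal unreachable within the search budget
        path.extend(segment)
        state = path[-1]
    return path

# Toy usage: reach 10 from 0, with "a positive multiple of 5" as an
# intermediate subgoal; successor states add 1 or 2.
steps = follow_strategy(
    0,
    [lambda s: s > 0 and s % 5 == 0, lambda s: s == 10],
    lambda s: [s + 1, s + 2],
)
print(steps)  # e.g. [0, 1, 3, 5, 6, 8, 10]
</code>

The point of the sketch is the shape of the search, not the domain: because every subgoal is a prerequisite for the next, the search between consecutive subgoals stays small, which is what makes the strategy usable by a human.
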
  
Paper submitted to journal.
  
  
  * [[https://github.com/martinmozina/orange3-gol |Source code of Orange 3 add-on (new version in Python, but without active learning)]]
  * [[https://github.com/martinmozina/orangol |Source code of Orange 2 add-on (in C++, with active learning)]]
  
  