Phil Pavlik, Amanda Banker, and I are working on an extension of Phil’s previous work on MoFaCTs that goes beyond fact learning toward integrated mental models based on text.
The key idea is that practicing items in a sequence based on their semantic relations is like walking a mental model and therefore strengthens connections between concepts/propositions that may be spatially distant in the text itself.
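To make the sequencing idea concrete, here is a minimal sketch (not the project’s actual scheduler) of ordering practice items so that consecutive items are semantically related, approximating a “walk” through the mental model. The relatedness function is a hypothetical stand-in; a real system would use embeddings or a proposition graph.

```python
def relatedness(a: str, b: str) -> float:
    """Placeholder semantic relatedness: word overlap (Jaccard) between two sentences."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def walk_order(items: list[str], start: int = 0) -> list[str]:
    """Greedy walk: always move to the most related unvisited item."""
    remaining = list(range(len(items)))
    order = [remaining.pop(start)]
    while remaining:
        nxt = max(remaining, key=lambda i: relatedness(items[order[-1]], items[i]))
        remaining.remove(nxt)
        order.append(nxt)
    return [items[i] for i in order]

if __name__ == "__main__":
    sentences = [
        "Mitochondria produce ATP through cellular respiration.",
        "Chloroplasts capture light energy in plants.",
        "ATP stores energy used by the cell.",
        "Cellular respiration consumes oxygen and glucose.",
    ]
    for s in walk_order(sentences):
        print(s)
```

The point of the walk is that related statements get practiced back to back even when they are far apart in the original chapter.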
We will automatically generate such cloze practice items to remove the item-authoring burden from instructors. We will also generate items that are not explicitly in the text by creating paraphrases and inferred statements. These items will further move us from reinforcing a textbase model toward a situation model.
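As a rough illustration of automatic cloze generation (a deliberate simplification of what the system will do), the sketch below blanks out a candidate key term in a sentence. The term-selection heuristic here is hypothetical; actual item generation would rely on NLP to choose terms and to produce paraphrases and inferences.

```python
import re

STOPWORDS = {"the", "a", "an", "of", "and", "in", "to", "is", "are", "through", "by", "used"}

def make_cloze(sentence: str) -> tuple[str, str]:
    """Return (cloze_prompt, answer) by blanking the longest non-stopword term."""
    words = re.findall(r"[A-Za-z]+", sentence)
    candidates = [w for w in words if w.lower() not in STOPWORDS]
    answer = max(candidates, key=len)
    prompt = re.sub(rf"\b{re.escape(answer)}\b", "_____", sentence, count=1)
    return prompt, answer

if __name__ == "__main__":
    prompt, answer = make_cloze("Mitochondria produce ATP through cellular respiration.")
    print(prompt)             # _____ produce ATP through cellular respiration.
    print("Answer:", answer)  # Mitochondria
```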
Finally, the system will provide adaptive feedback with a refutational aspect to help remediate systematic errors (i.e., near misses), both in the form of elaborative feedback and in the form of refutational dialogues.
Stevens Amendment Notice: This project will be 100% financed with Federal funds at a dollar amount of $1,240,151. No non-governmental funds will be used to finance this project.