priMA – Modelling of response processes depending on individual characteristics and task properties

The project addresses content-related, methodological, and diagnostic challenges in the use of process data and investigates how individual response processes can be represented and modelled statistically in an appropriate way.

In recent years, international comparative studies in education have increasingly switched to computer-based assessment. This opens up opportunities for improving the quality of an assessment and the interpretability of its data (e.g. through adaptive testing or the identification of unmotivated test behaviour). Behavioural data derived from log data, known as process data, describes the course of a test session and can be used to exploit these possibilities. Log data includes the interactions of the test taker with the assessment system, for example mouse clicks and keystrokes, together with the associated time information.
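
To illustrate, the following minimal Python sketch derives item-level processing time, a typical process indicator, from such log events. The event format, field names, and values are hypothetical and only serve to illustrate the general idea; they do not correspond to the actual PISA or PIAAC log formats.

    from collections import defaultdict

    # Hypothetical log events: (person, item, event type, timestamp in seconds).
    log_events = [
        ("p01", "item1", "item_start",  0.0),
        ("p01", "item1", "click",       3.2),
        ("p01", "item1", "keypress",    7.9),
        ("p01", "item1", "item_end",   12.4),
        ("p01", "item2", "item_start", 12.4),
        ("p01", "item2", "item_end",   15.1),
    ]

    def processing_times(events):
        """Derive item-level processing time (item_end minus item_start) per person and item."""
        start, times = {}, defaultdict(float)
        for person, item, event, timestamp in events:
            if event == "item_start":
                start[(person, item)] = timestamp
            elif event == "item_end":
                times[(person, item)] = round(timestamp - start[(person, item)], 3)
        return dict(times)

    print(processing_times(log_events))
    # {('p01', 'item1'): 12.4, ('p01', 'item2'): 2.7}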

Process data reflects people’s behaviour while working on a task. It is therefore assumed that such data allows conclusions to be drawn about underlying cognitive and motivational processes. Examining the course of task processing by means of process data thus provides traces of construct-relevant processes, successful solution strategies, or difficulties that a person experienced while working on a task. Of particular interest are process indicators that predict task success and that, in the case of an incorrect answer, provide information on why someone was unable to solve a task. A deeper understanding of task response processes can provide important information for positively influencing learning processes (e.g. for individualising instruction and feedback that promotes learning).
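
As a simple illustration of relating a process indicator to task success, the following sketch fits a logistic regression of item correctness on processing time. The data are invented and the use of scikit-learn is an arbitrary choice for illustration; the actual analyses carried out in priMA are not implied here.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Invented example data: processing time per item attempt (seconds)
    # and whether the item was solved (1) or not (0).
    processing_time = np.array([[12.4], [2.7], [45.0], [8.1], [30.5], [5.3]])
    solved = np.array([1, 0, 1, 1, 1, 0])

    # Does processing time predict the probability of solving an item?
    model = LogisticRegression().fit(processing_time, solved)
    print(model.coef_, model.intercept_)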

Against this background, priMA investigated content-related, methodological and diagnostic challenges in the use and modelling of process data, pursuing two overarching research questions:

  1. How can response processes be represented in the form of valid process indicators? The fundamental question is how log data can be summarized so that it can be interpreted as an indicator of cognitive processes that are not directly observable. This requires clarifying to what extent a process indicator allows ambiguous interpretations (e.g. does a short processing time indicate that a person worked very efficiently, or that he or she was unmotivated?). To gain a better understanding of how answers in questionnaires and tests are generated, differences between persons and task characteristics were also investigated (e.g. do very short processing times occur particularly among high-performing persons and on very easy tasks?).
  2. How can response processes be taken into account in competence modelling? The main methodological objective was to model process indicators appropriately in statistical measurement and explanatory models, together with traditional response data. One aim was to represent the speed of task processing as a latent person characteristic; the focus was therefore primarily on jointly modelling responses and response times in cognitive performance tests (see the sketch after this list). In addition, models were investigated that represent components of the response process which are not primarily relevant to the construct but which can substantially affect the reliability of the competence assessment (e.g. whether a person was motivated enough for the test to actually measure his or her knowledge and ability).
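
One common way of jointly modelling responses and response times, in the spirit of van der Linden's hierarchical framework, combines an IRT model for response accuracy with a lognormal model for response times. The following equations are a sketch of this general approach, not necessarily the exact specification used in the project:

    P(U_{pi} = 1 \mid \theta_p) = \frac{1}{1 + \exp\{-a_i(\theta_p - b_i)\}}
    \ln T_{pi} \sim N(\beta_i - \tau_p, \sigma_i^2)
    (\theta_p, \tau_p)^\top \sim N(\mu_P, \Sigma_P)

Here U_{pi} and T_{pi} denote the response and response time of person p on item i, \theta_p is ability, \tau_p is speed as a latent person characteristic, a_i and b_i are item discrimination and difficulty, \beta_i is the time intensity of the item, and \sigma_i^2 the residual response-time variance; the person-level covariance matrix \Sigma_P links ability and speed.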

To address these research questions, the project team mainly used available data from PISA and PIAAC.

Selected Publications

Goldhammer, F., Hahnel, C., & Kroehne, U. (in press). Analyzing log file data from PIAAC. In D. B. Maehler & B. Rammstedt (Eds.), Large-Scale Cognitive Assessment: Analysing PIAAC Data. Springer International Publishing.

Goldhammer, F., Martens, T., & Lüdtke, O. (2017). Conditioning factors of test-taking engagement in PIAAC: an exploratory IRT modelling approach considering person and item characteristics. Large-scale Assessments in Education, 5, 1–25. https://doi.org/10.1186/s40536-017-0051-9

Goldhammer, F., & Zehner, F. (2017). What to Make Of and How to Interpret Process Data. Measurement: Interdisciplinary Research and Perspectives, 15(3–4), 128–132. https://doi.org/10.1080/15366367.2017.1411651

Hahnel, C., Kroehne, U., Goldhammer, F., Schoor, C., Mahlow, N., & Artelt, C. (2019). Validating process variables of sourcing in an assessment of multiple document comprehension. British Journal of Educational Psychology, 89, 524–537. https://doi.org/10.1111/bjep.12278

Kroehne, U., & Goldhammer, F. (2018). How to conceptualize, represent, and analyze log data from technology-based assessments? A generic framework and an application to questionnaire items. Behaviormetrika, 45(2), 527–563. https://doi.org/10.1007/s41237-018-0063-y


Funding: Centre for International Student Assessment (ZIB)

Project Partners: IPN, TUM

Duration: 2017 - 2019

Status: Completed

Contact: Carolin Hahnel