ZIB priMA

Assessing the process of working on a task can reveal successful paths to a solution and point to difficulties along the way. The project “Modelling response processes depending on individual variables and task characteristics” (ZIB priMA) focuses on how such task-solving processes can be represented and modelled appropriately.

Generally speaking, competence assessment investigates whether or not an individual has successfully solved a task. If tests are technology-based, e.g. delivered as computer-based assessment (CBA), such “traditional” outcome data can be supplemented by behavioural data that provide information about the course of working on a task. These so-called process data are generated without placing any additional burden on the test-takers and are saved in log files. They comprise, for example, the time spent on a task as well as the steps taken and their sequence, derived from mouse clicks and keystrokes.
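To make this concrete, the following minimal sketch (in Python) shows how such process indicators might be derived from raw log events. The event format and field names are hypothetical illustrations and do not reflect the actual PISA or PIAAC log-file schema.

# Illustrative sketch: deriving simple process indicators from raw log events.
# The event format (timestamp, action, target) is hypothetical, not the real
# PISA/PIAAC log-file schema.

from dataclasses import dataclass

@dataclass
class LogEvent:
    timestamp: float  # seconds since task onset (hypothetical unit)
    action: str       # e.g. "click", "keypress", "submit"
    target: str       # UI element the action referred to

def process_indicators(events: list[LogEvent]) -> dict:
    """Compute basic process indicators for one test-taker on one task."""
    if not events:
        return {"time_on_task": 0.0, "n_actions": 0, "action_sequence": []}
    return {
        # total processing time from first to last recorded event
        "time_on_task": events[-1].timestamp - events[0].timestamp,
        # amount of interaction with the task
        "n_actions": len(events),
        # order of steps taken, usable for sequence analyses
        "action_sequence": [e.action for e in events],
    }

# Example: a short hypothetical interaction log
log = [
    LogEvent(0.0, "click", "item_stimulus"),
    LogEvent(4.2, "keypress", "answer_field"),
    LogEvent(9.7, "submit", "item"),
]
print(process_indicators(log))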

The project therefore investigates content-related and methodological challenges concerning international assessments such as PISA (Programme for International Student Assessment) and PIAAC (Programme for the International Assessment of Adult Competencies). The analyses will focus on result and log data from the computer-based competence assessments of previous and future PISA and PIAAC studies. The project addresses the following overarching research questions:

1) How can we identify valid process indicators? Process data reflect individual test-taking behaviour. We therefore assume that process data permit inferences about underlying cognitive and motivational processes and competencies. However, questions remain as to a) to what extent new and meaningful indicators for competence assessment can be derived from the data, b) how clearly these indicators can be interpreted, and c) to what extent they contribute to assuring or improving the quality of existing indicators.

2) How can response processes be modelled together with outcome data? Methodological challenges concern the joint modelling of response and process data in statistical measurement and explanatory models that adequately describe the relationships between competencies and processes in large-scale assessments (e.g. processing times as a latent factor representing speed).
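As an illustration of what such joint modelling can look like (a sketch of one well-known approach, not necessarily the model the project will adopt), the hierarchical framework of van der Linden (2007) pairs an item response model for correctness with a lognormal model for response times:

P(U_{pi} = 1 \mid \theta_p) = \frac{\exp\{a_i(\theta_p - b_i)\}}{1 + \exp\{a_i(\theta_p - b_i)\}},
\qquad
\ln T_{pi} \sim \mathcal{N}(\beta_i - \tau_p,\; \alpha_i^{-2})

Here U_{pi} and T_{pi} are the response and the response time of person p on item i; \theta_p denotes ability and \tau_p speed; b_i and \beta_i are the item's difficulty and time intensity, with a_i and \alpha_i the corresponding discrimination parameters. Assuming (\theta_p, \tau_p) to be jointly normal at the person level allows the correlation between competence and speed to be estimated from the combined outcome and process data.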

3) How do individual variables and task-specific characteristics influence the path to a solution? To gain a better understanding of how responses to questionnaires and tests are generated, we will also investigate differences between individuals and tasks that might reflect systematic influences (e.g. changes of strategy across tasks).

Contact: Carolin Hahnel
