Completed Project Phase 2018-2022
The overarching aim of NEPS-TBT was to operate scientifically grounded technology-based assessments that connect to international standards.
Five central innovation aspects contributed to this objective: (1) step-by-step updating of software components, (2) transfer of assessment innovations (e.g., innovative item formats and increased measurement efficiency) into panel studies, (3) cross-mode linking, including for heterogeneous assessment hardware (tablets, touch input), (4) processing of all TBT data on the basis of log data, and (5) automated software testing and quality assurance. These innovation foci were implemented in the following work packages:
- A strategy was developed for the testing and quality assurance of study-specific TBT modules. Automated testing enabled complete checks of data storage; it also served the quality assurance of fixed test assembly and allowed adaptive test assembly to be checked.
- The development of a standardized editor enabled automated checking of codebooks and test definitions for multistage tests.
- A generic, study-independent concept was developed for coding missing responses, taking into account indicators derived from log data.
- Prerequisites for implementing psychometrically sophisticated test designs, such as adaptive algorithms, were prepared. To this end, the TBA Centre developed an infrastructure for configuring CAT algorithms from R during test development. The algorithms were tested in simulation studies and operatively integrated into the delivery software.
- Following the principle of economy, result data and log data were not processed in parallel. Instead, result data were derived from log data. To this end, criteria were developed for defining the completeness of log data (cf. Kroehne & Goldhammer, 2018). These developments were used to create generic tools that enable reproducible and transparent data processing.
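The adaptive algorithms mentioned above can be illustrated with a minimal sketch. This is not the TBA Centre's actual implementation, and the item bank is hypothetical: it shows maximum-information item selection under a Rasch (1PL) model, written in Python for illustration (the infrastructure described above is configured from R).

```python
import math

def item_information(theta, b):
    """Fisher information of a Rasch (1PL) item with difficulty b at ability theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def select_next_item(theta, item_bank, administered):
    """Pick the not-yet-administered item with maximum information at theta."""
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, item_bank[i]))

# Hypothetical item bank: Rasch difficulty parameters.
bank = [-1.5, -0.5, 0.0, 0.5, 1.5]

# With the medium item (index 2) already administered and a current ability
# estimate of 0.2, the next item is the closest remaining difficulty (index 3).
chosen = select_next_item(0.2, bank, administered={2})
```

Simulation studies like those mentioned above would loop this selection step over simulated examinees, updating the ability estimate after each response.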
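The log-data-based processing described in the last work package can be sketched as follows. Event names, the completeness criterion, and the missing codes are illustrative assumptions, not the NEPS tools themselves: raw log events are checked for completeness and then reduced to result data, with missing responses coded as omitted (item presented but skipped) or not reached (item never presented), in the spirit of the generic framework of Kroehne & Goldhammer (2018).

```python
# Illustrative log events: (timestamp, event_type, item_id, value)
log = [
    (0.0, "item_shown", "it1", None),
    (4.2, "response", "it1", "B"),
    (4.3, "item_shown", "it2", None),
    (9.9, "item_shown", "it3", None),
    (12.0, "test_end", None, None),
]

OMITTED, NOT_REACHED = "-97", "-94"  # hypothetical missing codes

def complete(events):
    """A minimal completeness criterion: the log is bounded by a test_end event."""
    return any(e[1] == "test_end" for e in events)

def to_result_data(events, expected_items):
    """Derive result data from log events instead of storing results in parallel."""
    shown = {e[2] for e in events if e[1] == "item_shown"}
    answered = {e[2]: e[3] for e in events if e[1] == "response"}
    result = {}
    for item in expected_items:
        if item in answered:
            result[item] = answered[item]
        elif item in shown:
            result[item] = OMITTED      # presented but skipped
        else:
            result[item] = NOT_REACHED  # never presented
    return result

items = ["it1", "it2", "it3", "it4"]
assert complete(log)
print(to_result_data(log, items))
# → {'it1': 'B', 'it2': '-97', 'it3': '-97', 'it4': '-94'}
```

Because the result data are a deterministic function of the log, the same processing can be re-run at any time, which is what makes the pipeline reproducible and transparent.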
- Deribo, T., Goldhammer, F. & Kröhne, U. (2022). Changes in the speed-ability relation through different treatments of rapid guessing. Educational and Psychological Measurement. Advance online publication. doi: 10.1177/00131644221109490
- Deribo, T., Kröhne, U. & Goldhammer, F. (2021). Model‐based treatment of rapid guessing. Journal of Educational Measurement, 58(2), 281-303. doi: 10.1111/jedm.12290
- Kröhne, U., Deribo, T. & Goldhammer, F. (2020). Rapid guessing rates across administration mode and test setting. Psychological Test and Assessment Modeling, 62(2), 144-177. doi: 10.25656/01:23630
- Kroehne, U. & Goldhammer, F. (2018). How to conceptualize, represent, and analyze log data from technology-based assessments? A generic framework and an application to questionnaire items. Behaviormetrika, 45(2), 527-563. doi: 10.1007/s41237-018-0063-y
- Engelhardt, L., Goldhammer, F., Naumann, J., & Frey, A. (2017). Experimental validation strategies for heterogeneous computer-based assessment items. Computers in Human Behavior, 76(11), 683-692. doi: 10.1016/j.chb.2017.02.020