NEPS TBT: Technology-Based Assessment Work Package
As a consortium partner, the DIPF | Leibniz Institute for Research and Information in Education contributes to the planning and implementation of the National Educational Panel Study (NEPS). One of the project's focal points is the Technology-Based Assessment (TBT) work package.
The National Educational Panel Study (NEPS) is a multi-cohort longitudinal study that examines the individual educational pathways and skill development of more than 70,000 participants and about 50,000 context persons in seven starting cohorts across the entire life course. With its focus on competence measurement, the NEPS provides the data foundation for studying educational opportunities and transitions in Germany. In addition to interviews and surveys, competency tests are conducted regularly in various domains. The NEPS was initially funded as a project by the Federal Ministry of Education and Research (BMBF) in 2009. Since 2014, the National Educational Panel has been continued at the Leibniz Institute for Educational Trajectories (LIfBi) in Bamberg.
Project Description
The Technology-Based Assessment (TBT) work package is part of the NEPS methods group and is located at DIPF in the TBA Centre (Centre for Technology-Based Assessment). It is led scientifically by Prof. Dr. Frank Goldhammer, with Dr. Daniel Schiffner as scientific co-lead and Dr. Lena Engelhardt as operational lead. NEPS-TBT works closely with the Leibniz Institute for Educational Trajectories (LIfBi) and focuses on innovative survey and testing methods, for example, computer- and internet-based competency testing.
Project Objectives for the 2023-2027 Phase
In addition to providing scientific services, NEPS-TBT conducts accompanying research on topics of current relevance to the NEPS. In the current project phase, this includes:
Co-design and implementation of proctored vs. unproctored online surveys
The focus is on experimentally investigating possible future online survey formats and their effects on, for example, processing behavior and data quality. Compared to the classic one-to-one interview situation in the household, the formats to be tested open up promising new possibilities: participants could, for instance, complete the competency tests online accompanied by a virtually connected interviewer (proctored mode) or complete them independently in an online setting (unproctored mode). Indicators of potentially deviating processing behavior (e.g., prolonged inactivity or rapid guessing) are developed and evaluated at runtime, and appropriate prompts are designed and presented as interventions. The project examines whether such prompts can induce behavioral adaptations and whether the different conditions allow an interpretation of the outcomes that is as valid as in the classic one-to-one setting.
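To illustrate how runtime indicators and prompts of this kind might interact, the following minimal rule-based sketch checks a single item for prolonged inactivity or repeated rapid guessing and returns a prompt text. The thresholds, event fields, and prompt wording are purely illustrative assumptions, not the instruments or parameters used in NEPS.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative thresholds (assumptions, not NEPS parameters)
RAPID_GUESS_THRESHOLD_S = 3.0    # responses faster than this are treated as rapid guesses
INACTIVITY_THRESHOLD_S = 120.0   # interaction gaps longer than this are treated as inactivity

@dataclass
class ItemEvent:
    item_id: str
    response_time_s: float   # time from item onset to response
    max_idle_s: float        # longest gap between interactions within the item

def check_for_prompt(event: ItemEvent, prior_rapid_guesses: int) -> Optional[str]:
    """Return an intervention prompt if the current item shows deviating behavior."""
    if event.max_idle_s >= INACTIVITY_THRESHOLD_S:
        return "You have been inactive for a while. Please continue with the test."
    if event.response_time_s < RAPID_GUESS_THRESHOLD_S and prior_rapid_guesses >= 2:
        # intervene only after repeated rapid guessing, not after a single fast response
        return "Please take enough time to read each task carefully."
    return None

if __name__ == "__main__":
    event = ItemEvent(item_id="item_07", response_time_s=1.4, max_idle_s=5.0)
    print(check_for_prompt(event, prior_rapid_guesses=2))
```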
Diagnostic use of process indicators, e.g., to predict panel readiness
Process indicators are to be extracted from log data and used to model competency data, for example, to inform the research-based further development of existing scaling models. Process indicators can also be used to address aspects of data quality and missing coding, i.e., the assignment of missing responses to specific categories of missingness.
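As a concrete illustration, the sketch below derives two simple process indicators (time on task and number of interactions) from raw log events and assigns an omitted response to a missingness category. The log schema, indicator definitions, and category labels are assumptions for illustration only, not the NEPS data format or coding rules.

```python
import pandas as pd

# Toy log data with an assumed schema (person, item, event, timestamp in seconds)
log = pd.DataFrame({
    "person": ["p1", "p1", "p1", "p2", "p2"],
    "item":   ["i1", "i1", "i1", "i1", "i1"],
    "event":  ["item_start", "click", "item_end", "item_start", "item_end"],
    "time_s": [0.0, 12.3, 45.0, 0.0, 1.1],
})

def process_indicators(df: pd.DataFrame) -> pd.DataFrame:
    """Aggregate raw log events into per-person, per-item process indicators."""
    grouped = df.groupby(["person", "item"])
    out = grouped["time_s"].agg(time_on_task=lambda t: t.max() - t.min())
    out["n_interactions"] = grouped["event"].agg(lambda e: int((e == "click").sum()))
    # assumed missing-coding rule: no interaction at all -> "omitted", otherwise "valid"
    out["missing_code"] = out["n_interactions"].map(lambda n: "omitted" if n == 0 else "valid")
    return out.reset_index()

print(process_indicators(log))
```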
In addition, process data will be combined with outcome data and paradata, such as response times, to predict willingness to participate in follow-up surveys. This can yield drop-out risk profiles from which implications for panel maintenance and incentivization can be derived.
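The sketch below illustrates the general idea with a simple logistic regression on synthetic data: predictors stand in for process indicators and paradata, and the predicted probabilities are inverted into drop-out risk scores. The feature names, data, and model are hypothetical placeholders and do not reflect the variables, models, or results used in NEPS.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500

# Hypothetical predictors: process indicators and paradata (placeholder variables)
X = np.column_stack([
    rng.normal(60, 15, n),      # mean response time per item (s)
    rng.poisson(2, n),          # number of rapid guesses in the test
    rng.integers(0, 2, n),      # test completed without interruption (0/1)
])
# Synthetic outcome: 1 = participated in the follow-up survey
y = (rng.random(n) < 0.7).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Predicted participation probabilities can be inverted into drop-out risk scores,
# e.g., to target panel-maintenance or incentivization measures at high-risk cases.
dropout_risk = 1 - model.predict_proba(X)[:, 1]
print("indices of the five highest-risk cases:", np.argsort(dropout_risk)[-5:])
```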
Research Topics
- Investigation of different survey formats in online settings (e.g., proctoring, prompts)
- Investigation of processing behavior in online tests and effectiveness of behavioral interventions
- Predicting willingness to participate in follow-up surveys using multiple data sources, such as process indicators, outcome data, and paradata
- Creation and validation of innovative item and response formats for computer-based testing
- Analysis and validation of process-related behavioral data from competency measurements
- Modeling of processing speed
Science-based Services
- Provision of the CBA ItemBuilder and deployment software (IRTLib) for delivering computer-based test modules
- Study-specific support in creating test items and assembling test modules
- Regular workshops and the development of a knowledge base to support item authors in independently creating computer-based test modules
- Prototyping of innovative new item formats
- Coordination of requirements for the further development of the CBA ItemBuilder authoring tool and the test application for use in NEPS
- Preparation and analysis of data sets (outcome and process data) and provision of existing DIPF TBA tools for analyzing the collected data
Selected Talks and Publications
Talks
- Köllich, L., Engelhardt, L., & Goldhammer, F. (2024, September). Bahnen sich Rapid Guesses im Testverlauf an? Erklärung und Vorhersage von Rapid Guesses mit Prozessdaten [Do rapid guesses build up over the course of the test? Explaining and predicting rapid guesses with process data]. Talk presented at the 53rd DGPs Congress, Vienna, Austria, September 16-19, 2024.
- Deribo, T., Kröhne, U., Hahnel, C., & Goldhammer, F. (2023, March). Time-on-Task from Log and Eye Movement Data: Commonalities and Differences. Talk presented at the Annual Meeting of the National Council on Measurement in Education (NCME), Virtual, Chicago, USA, March 28-30, 2023.
- Engelhardt, L., Kroehne, U., Hahnel, C., Deribo, T., & Goldhammer, F. (2021, June). Validating ability-related time components in reading tasks with unit structure. Talk presented at the NEPS Conference 2021, Virtual, June 8, 2021.
Publications
- Deribo, T., Goldhammer, F., & Kroehne, U. (2023). Changes in the Speed–Ability Relation Through Different Treatments of Rapid Guessing. Educational and Psychological Measurement, 83(3), 473-494. https://doi.org/10.1177/00131644221109490
- Goldhammer, F., Hahnel, C., Kroehne, U., & Zehner, F. (2021). From byproduct to design factor: On validating the interpretation of process indicators based on log data. Large-scale Assessments in Education, 9:20. https://doi.org/10.1186/s40536-021-00113-5
- Goldhammer, F., Kroehne, U., Hahnel, C., & De Boeck, P. (2021). Controlling speed in component skills of reading improves the explanation of reading comprehension. Journal of Educational Psychology, 113(5), 861-878. https://doi.org/10.1037/edu0000655
- Engelhardt, L., & Goldhammer, F. (2019). Validating test score interpretations using time information. Frontiers in Psychology, 10:1131. https://doi.org/10.3389/fpsyg.2019.01131
- Engelhardt, L., Goldhammer, F., Naumann, J., & Frey, A. (2017). Experimental validation strategies for heterogeneous computer-based assessment items. Computers in Human Behavior, 76, 683-692. https://doi.org/10.1016/j.chb.2017.02.020
Funding: Third-party funding
Co-operation: Leibniz Institute for Educational Trajectories (LIfBi), Leibniz Institute for Science and Mathematics Education (IPN)
Duration: 2023-2027
Status: ongoing
Scientific lead: Frank Goldhammer, Daniel Schiffner
Project team: Lena Engelhardt, Barbara Persch, Leo Köllich
Contact: Lena Engelhardt (operational lead)
Completed Project Phases
Completed Project Phase 2018-2022