Abstract
Item response theory (IRT) is a popular approach for addressing large-scale assessment problems in psychometrics and other areas of applied research. An emerging research direction that integrates IRT with machine learning techniques has made it applicable to a wide range of fields. The fully Bayesian approach to estimating IRT models is computationally expensive because the large number of MCMC iterations requires substantial memory to store massive amounts of data, which limits its use in many applications on traditional CPU architectures. To overcome these restrictions, previous studies turned to high performance computing, using either the distributed-memory Message Passing Interface (MPI) or the massively threaded compute unified device architecture (CUDA) to achieve speedups for a simple IRT model. This study focuses on the same two-parameter IRT model and aims to demonstrate the scalability of parallel algorithms that integrate CUDA into the MPI computing paradigm.
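The paper's implementation is not reproduced here, but the following minimal sketch illustrates the kind of hybrid MPI + CUDA evaluation the abstract describes: each MPI rank holds a slice of the person-by-item response matrix, a CUDA kernel evaluates the two-parameter logistic likelihood P(y_ij = 1) = 1 / (1 + exp(-a_j(theta_i - b_j))) for that slice, and MPI_Allreduce combines the per-GPU partial log-likelihoods, the step an MCMC sampler would repeat at every iteration. The kernel name, data sizes, reduction scheme, and compile command below are illustrative assumptions, not the authors' code.

```cuda
// Hedged sketch: hybrid MPI + CUDA evaluation of the two-parameter logistic (2PL)
// IRT log-likelihood. All names and sizes are illustrative assumptions.
// Assumed compile command: nvcc -ccbin mpicxx hybrid_2pl.cu -o hybrid_2pl

#include <mpi.h>
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// One thread per (person, item) response: accumulate the 2PL log-likelihood
// with P = 1 / (1 + exp(-a_j * (theta_i - b_j))).
__global__ void loglik_2pl(const int* y, const float* theta,
                           const float* a, const float* b,
                           int n_persons, int n_items, float* out)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= n_persons * n_items) return;
    int i = idx / n_items;          // person index (local to this MPI rank)
    int j = idx % n_items;          // item index
    float p = 1.0f / (1.0f + expf(-a[j] * (theta[i] - b[j])));
    float ll = y[idx] ? logf(p) : logf(1.0f - p);
    atomicAdd(out, ll);             // simple reduction; block-wise reduction scales better
}

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Illustrative sizes: each rank owns a horizontal slice of the response matrix.
    const int n_items = 40, n_persons_local = 2500;
    std::vector<int>   y(n_persons_local * n_items, 1);    // dummy 0/1 responses
    std::vector<float> theta(n_persons_local, 0.0f);       // person abilities
    std::vector<float> a(n_items, 1.0f), b(n_items, 0.0f); // item parameters

    int *d_y; float *d_theta, *d_a, *d_b, *d_ll;
    cudaMalloc(&d_y, y.size() * sizeof(int));
    cudaMalloc(&d_theta, theta.size() * sizeof(float));
    cudaMalloc(&d_a, a.size() * sizeof(float));
    cudaMalloc(&d_b, b.size() * sizeof(float));
    cudaMalloc(&d_ll, sizeof(float));
    cudaMemcpy(d_y, y.data(), y.size() * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(d_theta, theta.data(), theta.size() * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_a, a.data(), a.size() * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, b.data(), b.size() * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemset(d_ll, 0, sizeof(float));

    int n = n_persons_local * n_items;
    loglik_2pl<<<(n + 255) / 256, 256>>>(d_y, d_theta, d_a, d_b,
                                         n_persons_local, n_items, d_ll);
    float local_ll = 0.0f;
    cudaMemcpy(&local_ll, d_ll, sizeof(float), cudaMemcpyDeviceToHost);

    // MPI combines the per-rank (per-GPU) partial log-likelihoods.
    float global_ll = 0.0f;
    MPI_Allreduce(&local_ll, &global_ll, 1, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
    if (rank == 0) printf("global log-likelihood: %f\n", global_ll);

    cudaFree(d_y); cudaFree(d_theta); cudaFree(d_a); cudaFree(d_b); cudaFree(d_ll);
    MPI_Finalize();
    return 0;
}
```

In this arrangement, MPI provides the coarse-grained data decomposition across nodes while CUDA handles the fine-grained, per-response arithmetic on each node's GPU, which is the integration of CUDA into the MPI paradigm that the abstract refers to.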
Original language | English |
---|---|
Journal | Cluster Computing |
DOIs | |
State | Accepted/In press - 2023 |
Keywords
- Bayesian estimation
- GPU
- High performance computing
- Item response theory
- MCMC
- Two-parameter IRT model