I'm a PhD student in the Department of Applied Psychology and Human Development at the University of Toronto. I work under the supervision of Dr. Feng Ji in the Psychometrics and Responsible AI Lab.
My research harnesses advances in machine learning, statistics, and quantitative methodology to tackle pressing questions in psychological and educational measurement, with a particular emphasis on the convergence of psychometrics and artificial intelligence.
") does not match the recommended repository name for your site ("
").
", so that your site can be accessed directly at "http://
".
However, if the current repository name is intended, you can ignore this message by removing "{% include widgets/debug_repo_name.html %}
" in index.html
.
",
which does not match the baseurl
("
") configured in _config.yml
.
baseurl
in _config.yml
to "
".
Nanyu Luo, Yuting Han, Jinbo He, Xiaoya Zhang, Feng Ji#
Preprint 2025
PyTorch and TensorFlow are two widely adopted, modern deep learning frameworks that offer comprehensive computational libraries for developing deep learning models. In this study, we illustrate how to leverage these computational platforms to estimate a class of widely used psychometric models—dichotomous and polytomous Item Response Theory (IRT) models—along with their multidimensional extensions. Simulation studies demonstrate that the parameter estimates exhibit low mean squared error and bias. An empirical case study further illustrates how these two frameworks compare with other popular software packages in applied settings. We conclude by discussing the potential of integrating modern deep learning tools and perspectives into psychometric research.
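The following is a minimal sketch, not the paper's code, of how a dichotomous 2PL IRT model can be fit in PyTorch with gradient-based optimization. It uses joint maximum likelihood for brevity rather than the marginal-likelihood approaches typically preferred in practice; the sample sizes, learning rate, and number of epochs are illustrative assumptions.

```python
# Minimal sketch: joint gradient-based estimation of a 2PL IRT model in PyTorch.
# Shapes and hyperparameters are illustrative, not from the paper.
import torch

torch.manual_seed(0)
n_persons, n_items = 1000, 20

# Simulate dichotomous responses from a 2PL model.
true_theta = torch.randn(n_persons, 1)        # person abilities
true_a = torch.rand(1, n_items) * 1.5 + 0.5   # item discriminations
true_b = torch.randn(1, n_items)              # item difficulties
Y = torch.bernoulli(torch.sigmoid(true_a * (true_theta - true_b)))

# Parameters to estimate (log-parameterize a to keep it positive).
theta = torch.zeros(n_persons, 1, requires_grad=True)
log_a = torch.zeros(1, n_items, requires_grad=True)
b = torch.zeros(1, n_items, requires_grad=True)

opt = torch.optim.Adam([theta, log_a, b], lr=0.05)
for epoch in range(500):
    opt.zero_grad()
    logits = torch.exp(log_a) * (theta - b)
    # Negative Bernoulli log-likelihood of the observed response matrix.
    loss = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, Y, reduction="sum"
    )
    loss.backward()
    opt.step()

print("estimated discriminations:", torch.exp(log_a).detach())
```

The same loss can be written almost line-for-line in TensorFlow, which is part of the point the paper makes: modern deep learning frameworks expose automatic differentiation and optimizers that make psychometric model estimation straightforward to prototype.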
Nanyu Luo, Feng Ji#
Preprint 2025
Advances in deep learning have enhanced parameter estimation in item factor analysis (IFA), but traditional Variational Autoencoders (VAEs) often lack sufficient expressiveness. To overcome this, we introduce Adversarial Variational Bayes (AVB), which integrates VAEs with Generative Adversarial Networks via an auxiliary discriminator to relax the standard normal inference assumption. Building on this, we propose Importance‑weighted Adversarial Variational Bayes (IWAVB), which consistently achieves higher likelihoods and comparable mean‑square errors to Importance‑weighted Autoencoders (IWAE) in both empirical and simulated studies, and excels when latent distributions are multimodal. These results indicate that IWAVB can effectively scale IFA to large‑scale and multimodal datasets, fostering deeper integration of psychometric modeling and complex data analysis.
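As a rough illustration of the importance-weighting component that IWAVB builds on, the sketch below computes an IWAE-style lower bound for a dichotomous item factor model with a simple Gaussian encoder. The adversarial discriminator that distinguishes AVB/IWAVB from a standard VAE is omitted, and all layer sizes, names, and the choice of K = 5 importance samples are assumptions for the example only.

```python
# Minimal sketch of an importance-weighted bound (IWAE-style) for item factor
# analysis; the AVB discriminator is omitted and sizes are illustrative.
import torch
import torch.nn as nn

n_items, latent_dim, K = 20, 2, 5

encoder = nn.Sequential(nn.Linear(n_items, 32), nn.ReLU(),
                        nn.Linear(32, 2 * latent_dim))
decoder = nn.Linear(latent_dim, n_items)  # loadings and intercepts as a linear map

def iw_bound(y):
    """Importance-weighted lower bound on log p(y) with K samples per person."""
    mu, log_sigma = encoder(y).chunk(2, dim=-1)
    sigma = log_sigma.exp()
    z = mu + sigma * torch.randn(K, *mu.shape)            # (K, batch, latent_dim)
    logits = decoder(z)                                   # (K, batch, n_items)
    log_lik = -nn.functional.binary_cross_entropy_with_logits(
        logits, y.expand(K, *y.shape), reduction="none"
    ).sum(-1)
    log_prior = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(-1)
    log_q = torch.distributions.Normal(mu, sigma).log_prob(z).sum(-1)
    log_w = log_lik + log_prior - log_q                   # (K, batch)
    return (torch.logsumexp(log_w, dim=0)
            - torch.log(torch.tensor(float(K)))).mean()

y = torch.bernoulli(torch.full((64, n_items), 0.5))       # toy dichotomous responses
loss = -iw_bound(y)                                       # maximize the bound
loss.backward()
```

In IWAVB the Gaussian posterior above is replaced by an implicit, adversarially trained inference network, which is what allows the method to capture multimodal latent distributions while keeping the tighter importance-weighted objective.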