Nanyu Luo (Ronan, 羅南煜)
PhD student

I'm a PhD student in the Department of Applied Psychology and Human Development at the University of Toronto. I work under the supervision of Dr. Feng Ji in the Psychometrics and Responsible AI Lab.

My research harnesses advances in machine learning, statistics, and quantitative methodology to tackle pressing questions in psychological and educational measurement, with a particular emphasis on the convergence of psychometrics and artificial intelligence.


Education
  • University of Toronto
    Dept. of Applied Psychology and Human Development
    Ph.D. Student
    Sep. 2024 - present
  • University of Oxford
    MSc. in Statistical Science, Distinction
    Oct. 2023 - Sep. 2024
  • The Chinese University of Hong Kong, Shenzhen
    BSc. in Applied Mathematics, First Class
    Sep. 2019 - May 2023
Experience
  • The Chinese University of Hong Kong, Shenzhen
    Research Assistant, Advisor: Dr. Jinbo He
    May 2022 - present
  • Shenzhen Research Institute of Big Data
    Research Assistant
    Jun. 2021 - Aug. 2022
Honors & Awards
  • Connaught International Scholarship for Doctoral Students (2025-2029)
    2024
Selected Publications
Fitting Item Response Theory Models Using Deep Learning Computational Frameworks

Nanyu Luo, Yuting Han, Jinbo He, Xiaoya Zhang, Feng Ji#

Preprint 2025

PyTorch and TensorFlow are two widely adopted, modern deep learning frameworks that offer comprehensive computational libraries for developing deep learning models. In this study, we illustrate how to leverage these computational platforms to estimate a class of widely used psychometric models—dichotomous and polytomous Item Response Theory (IRT) models—along with their multidimensional extensions. Simulation studies demonstrate that the parameter estimates exhibit low mean squared error and bias. An empirical case study further illustrates how these two frameworks compare with other popular software packages in applied settings. We conclude by discussing the potential of integrating modern deep learning tools and perspectives into psychometric research.
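
The sketch below gives a minimal flavor of this idea, assuming a two-parameter logistic (2PL) model estimated by joint maximum likelihood with gradient descent; it is illustrative only, not the paper's implementation, which may use a different estimator (e.g., marginal maximum likelihood).

```python
# Illustrative sketch (not the paper's implementation): fitting a
# two-parameter logistic (2PL) IRT model in PyTorch by gradient descent.
# Joint maximum likelihood over abilities and item parameters is used for
# brevity; identification constraints are ignored.
import torch

torch.manual_seed(0)
n_persons, n_items = 1000, 20

# Simulate responses from a 2PL model: P(correct) = sigmoid(a_j * (theta_i - b_j))
theta_true = torch.randn(n_persons)
a_true = torch.rand(n_items) * 1.5 + 0.5          # discriminations in (0.5, 2.0)
b_true = torch.randn(n_items)                     # difficulties
y = torch.bernoulli(torch.sigmoid(a_true * (theta_true[:, None] - b_true)))

# Parameters to estimate (log-parameterize a to keep it positive)
theta = torch.zeros(n_persons, requires_grad=True)
log_a = torch.zeros(n_items, requires_grad=True)
b = torch.zeros(n_items, requires_grad=True)

opt = torch.optim.Adam([theta, log_a, b], lr=0.05)
for step in range(500):
    opt.zero_grad()
    logits = torch.exp(log_a) * (theta[:, None] - b)
    # Negative Bernoulli log-likelihood of the observed response matrix
    loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, y)
    loss.backward()
    opt.step()

print("estimated discriminations:", torch.exp(log_a).detach()[:5])
print("estimated difficulties:   ", b.detach()[:5])
```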

Generative Adversarial Networks for High-Dimensional Item Factor Analysis: A Deep Adversarial Learning Algorithm

Nanyu Luo, Feng Ji#

Preprint 2025

Advances in deep learning have enhanced parameter estimation in item factor analysis (IFA), but traditional Variational Autoencoders (VAEs) often lack sufficient expressiveness. To overcome this, we introduce Adversarial Variational Bayes (AVB), which integrates VAEs with Generative Adversarial Networks via an auxiliary discriminator to relax the standard normal inference assumption. Building on this, we propose Importance‑weighted Adversarial Variational Bayes (IWAVB), which consistently achieves higher likelihoods and comparable mean‑square errors to Importance‑weighted Autoencoders (IWAE) in both empirical and simulated studies, and excels when latent distributions are multimodal. These results indicate that IWAVB can effectively scale IFA to large‑scale and multimodal datasets, fostering deeper integration of psychometric modeling and complex data analysis.
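
As a rough sketch of the importance-weighted bound that IWAVB builds on, the snippet below computes an IWAE-style objective for a toy binary factor model with a closed-form Gaussian encoder; the adversarial component (the auxiliary discriminator replacing the encoder density) is omitted, so this is illustrative only and not the paper's algorithm.

```python
# Illustrative sketch (not the paper's algorithm): an importance-weighted
# evidence lower bound (IWAE-style) for a toy item factor model with a
# Gaussian encoder; the AVB discriminator is omitted for brevity.
import torch
import torch.nn as nn

n_items, n_factors, K = 10, 2, 5           # K importance samples per person

encoder_mu = nn.Linear(n_items, n_factors)
encoder_logvar = nn.Linear(n_items, n_factors)
decoder = nn.Linear(n_factors, n_items)    # loadings and intercepts in one map

def iw_bound(y):
    """log (1/K) sum_k p(y, z_k) / q(z_k | y), averaged over persons."""
    mu, logvar = encoder_mu(y), encoder_logvar(y)
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn(K, *mu.shape)               # (K, batch, factors)
    q = torch.distributions.Normal(mu, std)
    prior = torch.distributions.Normal(0.0, 1.0)
    log_px_z = torch.distributions.Bernoulli(logits=decoder(z)).log_prob(y).sum(-1)
    log_w = log_px_z + prior.log_prob(z).sum(-1) - q.log_prob(z).sum(-1)
    return (torch.logsumexp(log_w, dim=0) - torch.log(torch.tensor(float(K)))).mean()

y = torch.bernoulli(torch.full((32, n_items), 0.5))        # toy binary responses
loss = -iw_bound(y)                                        # maximize the bound
loss.backward()
```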
