Title: Multidimensional item response theory applications with mirt and mirtCAT

Author: Phil Chalmers

Affiliation: York University

Abstract: Item response theory (IRT) is a latent variable framework designed for modeling educational and psychological measurements. Popular representations of IRT include the Rasch, partial credit, graded response, and three-parameter logistic models. Today, IRT has become the state of the art in a variety of scientific disciplines, ranging from large-scale educational assessments, such as the PISA study, and standardized psychological testing to the development of objective scales for measuring, e.g., the degree of physical impairment in medical research. The mirt and mirtCAT packages are currently being developed to help users analyze test response data, study IRT models via Monte Carlo simulations, and implement real-time testing interfaces in R. The mirt package has been designed as a general-purpose IRT package for estimating item- and group-level parameters, diagnosing item and test misfit, detecting differential item and test functioning, modeling explanatory covariate terms, and scoring tests using latent variable modeling approaches. The focus of mirtCAT, on the other hand, is to provide tools for multidimensional computerized adaptive testing (MCAT) methodology by supplying functions to generate graphical user interfaces, as well as to perform Monte Carlo simulations for MCAT designs. Both packages are based upon the same underlying framework and provide a fluid workflow between collecting and analyzing item response data. During this presentation, I will briefly demonstrate various aspects of mirt for analyzing response data with IRT models, and give live demonstrations of how MCATs can be built and summarized with mirtCAT. Several examples will be presented using real and simulated datasets, and future work on the packages will be discussed.