Journal article
Journal of Applied Psychology, 2023
#iopsych #personality #psychometrics #quantmethods
Assistant Professor, Industrial-Organizational Psychology + Quantitative Methods
Department of Psychological Sciences
472 Sewall Hall
Rice University, MS-25
6100 Main Street
Houston, TX 77005 USA
APA
Fan, J., Sun, T., Liu, J., Zhao, T., Zhang, B., Chen, Z., … Hack, E. (2023). How well can an AI chatbot infer personality? Examining psychometric properties of machine-inferred personality scores. Journal of Applied Psychology.
Chicago/Turabian
Fan, Jinyan, Tianjun Sun, Jiayi Liu, Teng Zhao, Bo Zhang, Zheng Chen, Melissa Glorioso, and Elissa Hack. “How Well Can an AI Chatbot Infer Personality? Examining Psychometric Properties of Machine-Inferred Personality Scores.” Journal of Applied Psychology (2023).
MLA
Fan, Jinyan, et al. “How Well Can an AI Chatbot Infer Personality? Examining Psychometric Properties of Machine-Inferred Personality Scores.” Journal of Applied Psychology, 2023.
BibTeX
@article{jinyan2023a,
  title   = {How well can an AI chatbot infer personality? Examining psychometric properties of machine-inferred personality scores.},
  year    = {2023},
  journal = {Journal of Applied Psychology},
  author  = {Fan, Jinyan and Sun, Tianjun and Liu, Jiayi and Zhao, Teng and Zhang, Bo and Chen, Zheng and Glorioso, Melissa and Hack, Elissa}
}
The present study explores the plausibility of measuring personality indirectly through an artificial intelligence (AI) chatbot. This chatbot mines various textual features from users' free-text responses collected during an online conversation/interview and then uses machine learning algorithms to infer personality scores. We comprehensively examine the psychometric properties of the machine-inferred personality scores, including reliability (internal consistency, split-half, and test-retest), factorial validity, convergent and discriminant validity, and criterion-related validity. Participants were undergraduate students (n = 1,444) enrolled at a large southeastern public university in the United States who completed a self-report Big Five personality measure (IPIP-300) and engaged with an AI chatbot for approximately 20-30 min. In a subsample (n = 407), we obtained participants' cumulative grade point averages from the University Registrar and had their peers rate their college adjustment. In an additional sample (n = 61), we obtained test-retest data. Results indicated that machine-inferred personality scores (a) had overall acceptable reliability at both the domain and facet levels, (b) yielded a factor structure comparable to that of self-reported questionnaire-derived personality scores, (c) displayed good convergent validity but relatively poor discriminant validity (averaged convergent correlations = .48 vs. averaged machine-score intercorrelations = .35 in the test sample), (d) showed low criterion-related validity, and (e) exhibited incremental validity over self-reported questionnaire-derived personality scores in some analyses. In addition, there was strong evidence for cross-sample generalizability of the psychometric properties of machine scores. Theoretical implications, future research directions, and practical considerations are discussed.
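As a rough illustration of one of the reliability analyses the abstract names, the sketch below computes internal consistency (Cronbach's alpha) for a hypothetical item-score matrix. This is not the authors' code or data; the toy scores and the function are illustrative assumptions, using only the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items on the scale
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 respondents x 4 items on a 5-point Likert scale.
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])
alpha = cronbach_alpha(scores)  # high, since the toy items covary strongly
```

In the study, analogous coefficients would be computed per Big Five domain and facet, for both questionnaire-derived and machine-inferred scores.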