Using Large Language Models for Qualitative Analysis Can Introduce Serious Bias
Our data scientist, Aditya Chhabra, co-authored a paper with industry experts on the use of Large Language Models (LLMs) in social science research, specifically for annotating open-ended interviews with Rohingya refugees. The study urges caution: LLM annotations can carry systematic bias, and bespoke models trained on high-quality human annotations may offer better accuracy with less bias.
Find the complete paper here: https://arxiv.org/abs/2309.17147
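The paper's central concern — LLM annotations diverging from human judgments — is commonly quantified with an inter-annotator agreement statistic such as Cohen's kappa. Below is a minimal sketch of that check; the labels and category names are invented for illustration and are not from the paper.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two annotators' label lists."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # Observed agreement: fraction of items with identical labels.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement under chance, from each annotator's label frequencies.
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical theme labels for ten interview excerpts (illustrative only).
human = ["safety", "food", "safety", "shelter", "food",
         "safety", "shelter", "food", "safety", "shelter"]
llm   = ["safety", "food", "food", "shelter", "food",
         "safety", "safety", "food", "safety", "shelter"]

print(f"kappa = {cohens_kappa(human, llm):.2f}")  # well below perfect agreement
```

A kappa well below 1.0 signals that the LLM is not a drop-in replacement for human coders, which is the kind of evidence the paper uses to motivate caution.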